How do you change the way a function is printed? [duplicate] - python

I've only seen examples for setting the __repr__ method in class definitions. Is it possible to change the __repr__ for functions either in their definitions or after defining them?
I've attempted without success...
>>> def f():
...     pass
>>> f
<function f at 0x1026730c8>
>>> f.__repr__ = lambda: '<New repr>'
>>> f
<function __main__.f>

Yes, if you're willing to forgo the function actually being a function.
First, define a class for our new type:
import functools

class reprwrapper(object):
    def __init__(self, repr, func):
        self._repr = repr
        self._func = func
        functools.update_wrapper(self, func)
    def __call__(self, *args, **kw):
        return self._func(*args, **kw)
    def __repr__(self):
        return self._repr(self._func)
Add in a decorator function:
def withrepr(reprfun):
    def _wrap(func):
        return reprwrapper(reprfun, func)
    return _wrap
And now we can define the repr along with the function:
@withrepr(lambda x: "<Func: %s>" % x.__name__)
def mul42(y):
    return y*42
Now repr(mul42) produces '<Func: mul42>'

No, because repr(f) is done as type(f).__repr__(f) instead.
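A quick interactive check makes this visible (a sketch; the address will differ on your machine):
>>> def f():
...     pass
...
>>> f.__repr__ = lambda: '<New repr>'
>>> f.__repr__()           # the instance attribute is there...
'<New repr>'
>>> repr(f)                # ...but repr() goes through the type
'<function f at 0x1026730c8>'
>>> type(f).__repr__(f)    # which is what repr(f) actually calls
'<function f at 0x1026730c8>'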

In order to do that, you'd need to change the __repr__ function for the given class, which in this case is the built-in function class (types.FunctionType). Since in Python you cannot edit built-in classes, only subclass them, you cannot.
However, there are two approaches you could follow:
Wrap some functions as kwatford suggested
Create your own representation protocol with your own repr function. For example, you could define a myrepr function that looks for a __myrepr__ method first and falls back to repr if it is not found. While you cannot add __myrepr__ to the function class, you can add it to individual function objects, as you suggest (as well as to your own classes and objects). A possible implementation would be:
def myrepr(x):
    try:
        x.__myrepr__
    except AttributeError:
        return repr(x)
    else:
        return x.__myrepr__()
Then you could define __myrepr__ methods and use the myrepr function. Alternatively, you could also do __builtins__.repr = myrepr to make your function the default repr and keep using repr. This approach would end up doing exactly what you want, though editing __builtins__ may not always be desirable.
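For example, a quick sketch using the myrepr function above:
>>> def f():
...     pass
...
>>> f.__myrepr__ = lambda: '<New repr>'
>>> myrepr(f)
'<New repr>'
>>> myrepr(42)   # no __myrepr__, falls back to the built-in repr
'42'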

This appears to be difficult. Kwatford's approach only solves the problem partially, since it does not work for functions defined in classes: self would be treated like a positional argument, as explained in Decorating Python class methods - how do I pass the instance to the decorator? However, the solution to that question is unfortunately not applicable here, as using __get__() and functools.partial would override the custom __repr__().

Related

Conditional property decorator on class method

I have a python dataclass in which I want to conditionally assign certain decorators depending on some global variable.
The condition is checked at the top of the script, but for my example below, I've simply supplied the result of that checking. If the check is True, I want to give those methods the @functools.cached_property decorator. If it is False, I just want them to receive the standard @property decorator.
The issue I keep running into is that I can't quite figure out how (or if it's even possible) to make this work as a simple decorator. I mostly get errors about method objects when calling or manipulating test.x_times_y, and I'm not sure if it is possible to write the function in such a way that calling test.x_times_y in the example below actually yields the result that I want.
import functools
import dataclasses

_value_checked = False

def myDecorator(func):
    def decorator(self):
        if not _value_checked:
            return property(func)(self)
        else:
            return functools.cached_property(func)(self)
    return decorator

@dataclasses.dataclass
class MyClass():
    x: int
    y: int
    z: int = 0

    @myDecorator
    def x_times_y(self):
        return self.x*self.y

test = MyClass(5,6,7)
I'd also like to avoid getter and setter methods, so I'm hopeful that that is possible. I've looked at many answers on here (such as this one) but haven't been able to find an answer that actually works, as most don't apply to decorating methods. I'm using Python 3.8 for this.
The behavior you want can be implemented with a simple conditional assignment:
my_decorator = functools.cached_property if _value_checked else property
or
if _value_checked:
    my_decorator = functools.cached_property
else:
    my_decorator = property
If you need to do more complex logic at each use of the decorator, you can use a function that returns the decorator you want:
def my_decorator():
    if not _value_checked:
        return property
    else:
        return functools.cached_property
No complex argument forwarding required. Just delegate to the decorators you already have.
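For instance, applied to the dataclass from the question (a sketch that assumes _value_checked is known before the class body runs):

import dataclasses
import functools

_value_checked = True
my_decorator = functools.cached_property if _value_checked else property

@dataclasses.dataclass
class MyClass:
    x: int
    y: int
    z: int = 0

    @my_decorator
    def x_times_y(self):
        return self.x * self.y

test = MyClass(5, 6, 7)
print(test.x_times_y)   # 30; cached after the first access when _value_checked is True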
The way you've written myDecorator it can only be applied to functions that take a single argument:
def myDecorator(func):
    def decorator(self):
        if not _value_checked:
            return property(func)(self)
        else:
            return functools.cached_property(func)(self)
    return decorator
The simplest thing is to just return the function and not call it inside a wrapper:
def myDecorator(func):
    if not _value_checked:
        return property(func)
    else:
        return functools.cached_property(func)
If you did need to build a wrapper, the generally correct way is to have the wrapper function take arbitrary *args, **kwargs arguments so you can invoke the wrapped function with them:
def myDecorator(func):
    def wrapper(*args, **kwargs):
        if not _value_checked:
            return property(func)(*args, **kwargs)
        else:
            return functools.cached_property(func)(*args, **kwargs)
    return wrapper
Note that the function that myDecorator returns is not itself a decorator, it's a wrapper that replaces the decorated function -- that's why I've renamed it in the above implementation.
Note also that there is a practical difference between these implementations, which is that the second version (with the wrapper) evaluates _value_checked at the time the function is called, whereas the first version evaluates it at the time the function is defined. If that value is a constant it doesn't matter, but if you want to be able to toggle it at runtime and have the behavior change dynamically, you want the second version.

Allow help() to work on partial function object

I'm trying to make sure running help() at the Python 2.7 REPL displays the __doc__ for a function that was wrapped with functools.partial. Currently running help() on a functools.partial 'function' displays the __doc__ of the functools.partial class, not my wrapped function's __doc__. Is there a way to achieve this?
Consider the following callables:
import functools

def foo(a):
    """My function"""
    pass

partial_foo = functools.partial(foo, 2)
Running help(foo) will result in showing foo.__doc__. However, running help(partial_foo) results in the __doc__ of a Partial object.
My first approach was to use functools.update_wrapper, which correctly replaces the partial object's __doc__ with foo.__doc__. However, this doesn't fix the 'problem' because of how pydoc renders the documentation.
I've investigated the pydoc code, and the issue seems to be that partial_foo is actually a Partial object not a typical function/callable, see this question for more information on that detail.
By default, pydoc will display the __doc__ of the object's type, not of the instance, if the object it was passed is determined to be a class by inspect.isclass. See the render_doc function for more information about the code itself.
So, in my scenario above pydoc is displaying the help of the type, functools.partial NOT the __doc__ of my functools.partial instance.
Is there any way to alter my call to help(), or the functools.partial instance that's passed to help(), so that it will display the __doc__ of the instance, not the type?
I found a pretty hacky way to do this. I wrote the following function to override the __builtins__.help function:
import functools
import pydoc

def partialhelper(object=None):
    if isinstance(object, functools.partial):
        return pydoc.help(object.func)
    else:
        # Preserve the ability to go into interactive help if user calls
        # help() with no arguments.
        if object is None:
            return pydoc.help()
        else:
            return pydoc.help(object)
Then just replace it in the REPL with:
__builtins__.help = partialhelper
This works and doesn't seem to have any major downsides, yet. However, there isn't a way with the above naive implementation to support still showing the __doc__ of some functools.partial objects. It's all or nothing, but could probably attach an attribute to the wrapped (original) function to indicate whether or not the original __doc__ should be shown. However, in my scenario I never want to do this.
Note the above does NOT work when using IPython and the embed functionality. This is because IPython directly sets the shell's namespace with references to the 'real' __builtin__, see the code and old mailing list for information on why this is.
So, after some investigation there's another way to hack this into IPython. We must override the site._Helper class, which is used by IPython to explicitly setup the help system. The following code will do just that when called BEFORE IPython.embed:
import site
site._Helper.__call__ = lambda self, *args, **kwargs: partialhelper(*args, **kwargs)
Are there any other downsides I'm missing here?
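For the record, here is one way the opt-in idea mentioned above could be sketched (the show_wrapped_doc attribute name is invented for this sketch):

import functools
import pydoc

def partialhelper(object=None):
    # Only dereference partials whose wrapped function explicitly opted in
    # via the (made-up) show_wrapped_doc attribute.
    if isinstance(object, functools.partial) and getattr(object.func, 'show_wrapped_doc', False):
        return pydoc.help(object.func)
    return pydoc.help() if object is None else pydoc.help(object)

def foo(a):
    """My function"""

foo.show_wrapped_doc = True
partial_foo = functools.partial(foo, 2)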
How about implementing your own?
def partial_foo(*args):
    """ some doc string """
    return foo(*((2,) + args))
Not a perfect answer, but if you really want this, I suspect it is the only way to do it.
You identified the issue - partial functions aren't typical functions, and the dunder variables don't carry over. This applies not just to __doc__, but also __name__, __module__, and more. Not sure if this solution existed when the question was asked, but you can achieve this more elegantly ("elegantly" up to interpretation) by re-writing partial() as a decorator factory. Since decorators (& factories) do not automatically copy over dunder variables, you need to also use @wraps(func):
from functools import wraps

def wrapped_partial(*args, **kwargs):
    def foo(func):
        @wraps(func)
        def bar(*fargs, **fkwargs):
            return func(*args, *fargs, **kwargs, **fkwargs)
        return bar
    return foo
Usage example:
@wrapped_partial(3)
def multiply_triple(x, y=1, z=0):
    """Multiplies three numbers"""
    return x * y * z

# Without decorator syntax: multiply_triple = wrapped_partial(3)(multiply_triple)
With output:
>>> print(multiply_triple())
0
>>> print(multiply_triple(3, z=3))
27
>>> help(multiply_triple)
Help on function multiply_triple in module __main__:

multiply_triple(x, y=1, z=0)
    Multiplies three numbers
Thing that didn't work, but informative when using multiple decorators
You might think, as I first did, that based upon the stacking syntax of decorators in PEP-318, you could put the wrapping and the partial function definition in separate decorators, e.g.
def partial_func(*args, **kwargs):
    def foo(func):
        def bar(*fargs, **fkwargs):
            return func(*args, *fargs, **kwargs, **fkwargs)
        return bar
    return foo

def wrapped(f):
    @wraps(f)
    def wrapper(*args, **kwargs):
        return f(*args, **kwargs)
    return wrapper

@wrapped
@partial_func(z=3)
def multiply_triple(x, y=1, z=0):
    """Multiplies three numbers"""
    return x * y * z
In these cases (and in reverse order), the decorators are applied one at a time, and the @partial_func interrupts wrapping. This means that if you are trying to use any decorator that you want to wrap, you need to rewrite the decorator as a factory whose returned function is itself decorated by @wraps(func). If you are using multiple decorators, they all have to be turned into wrapped factories.
Alternate method to have decorators "wrap"
Since decorators are just functions, you can write a copy_dunder_vars(obj1, obj2) function that returns obj2 but with all the dunder variables from obj1. Call it as:
def foo():
    pass

foo = copy_dunder_vars(decorator(foo), foo)
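A minimal sketch of such a helper, assuming the dunders worth copying are the ones functools lists in WRAPPER_ASSIGNMENTS (this is close to what functools.update_wrapper already does):

import functools

def copy_dunder_vars(obj1, obj2):
    # copy the attributes functools treats as wrapper metadata
    # (__name__, __doc__, __module__, ...) from obj1 onto obj2
    for attr in functools.WRAPPER_ASSIGNMENTS:
        try:
            setattr(obj2, attr, getattr(obj1, attr))
        except AttributeError:
            pass  # obj1 doesn't have this attribute; skip it
    return obj2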
This goes against the preferred syntax, but practicality beats purity. I think "not forcing you to rewrite decorators that you're borrowing from elsewhere and leaving largely unchanged" fits into that category. After all that wrapping, don't forget ribbon and a bow ;)

How can I see if a method is a decorator?

Is it possible to inspect a function/method to see whether it can be used as a decorator? In that it follows the usual way decorators wrap other functions and return a callable? Specifically, I'm looking to validate 3rd party code.
By applying a suspected decorator, catching exceptions, and then testing whether the result contains a __call__ method, you could produce a guess as to whether a given callable is a decorator or not. But it will be only a guess, not a guarantee.
Beyond that, I do not believe what you want will be possible in general, due to the dynamically typed nature of the Python language and to the special treatment of built-in functions in the CPython interpreter. It is not possible to programmatically tell whether a callable will accept another callable as an argument, or what type its return value will have. Also, in CPython, for functions implemented in C, you cannot even inspect a callable to see how many arguments it accepts.
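A rough sketch of that guess (the helper name is made up, and it can easily be fooled either way):

def looks_like_decorator(candidate):
    def dummy():
        pass
    try:
        result = candidate(dummy)
    except Exception:
        return False          # did not accept a single callable argument
    return callable(result)   # returned something callable, so maybe a decorator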
The word "decorator" can be taken to mean different things. One way to define it is, a decorator is any callable that accepts a single (callable) argument and returns a callable.
Note that I have not even used the word "function" in this definition; it would actually be incorrect to do so. Indeed, some commonly used decorators have strange properties:
The built-in classmethod and staticmethod decorators return descriptor objects, not functions.
Since language version 2.6 you can decorate classes, not just functions and methods.
Any class containing an __init__(self, somecallable) method and a __call__(self, *args, **kwargs) method can be used as a decorator.
Since there is no standardized decorator in Python, there's no real way of telling if a function is a decorator unless you know something about the decorator you're looking for.
If the decorator is under your control, you can add a mark to indicate it's a decorated function. Otherwise there is no real unified way of doing this. Take this example for instance:
def g():
    pass

def decorator(func):
    return g

@decorator
def f():
    pass

In the above example, at run time f and g will be identical, and there is no way of telling the two apart.
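For completeness, a minimal sketch of the "mark" idea mentioned above, for decorators you do control (the attribute names are arbitrary):

def my_decorator(func):
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    wrapper.decorated_by = my_decorator   # the mark
    wrapper.wrapped = func
    return wrapper

@my_decorator
def f():
    pass

print(getattr(f, 'decorated_by', None) is my_decorator)   # True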
Any callable with the right number of arguments can be used as a decorator. Remember that
@foo
def bar(...):
    ...

is exactly the same as

def bar(...):
    ...
bar = foo(bar)
Naturally, since foo could return anything, you have no way of checking whether a function has been decorated or not. Although foo could be nice and leave a mark, it has no obligation to do so.
If you are given some Python code and you want to find all the things that are decorators, you can do so by parsing the code into an abstract syntax tree and then walking the tree looking for decorated functions. Here's an example, storing the .ids of the decorators. Obviously, you could store the ast objects if you wanted to.
>>> import ast
>>> class DecoratorFinder(ast.NodeVisitor):
...     def __init__(self, *args, **kwargs):
...         super(DecoratorFinder, self).__init__(*args, **kwargs)
...         self.decorators = set()
...
...     def visit_FunctionDef(self, node):
...         self.decorators.update(dec.id for dec in node.decorator_list)
...         self.generic_visit(node)
...
>>> finder = DecoratorFinder()
>>> x = ast.parse("""
... @dec
... def foo():
...     pass
... """)
>>> finder.visit(x)
>>> finder.decorators
set(['dec'])
No, this is not possible. Maybe instead of checking whether f is a decorator, you should think about why you need to check that?
If you are expecting some specific decorator, you can directly check for that; if you want some specific behavior/methods/attributes, you can check for those.
If you want to check whether some callable f can be used as a decorator, you can test the decorator behavior by passing in some dummy function, but in general it may not work or may behave differently for different inputs.
Here is such a naive check:
def decorator1(func):
    def _wrapper(*args, **kwargs):
        print "before"
        func(*args, **kwargs)
        print "after"
    return _wrapper

def dummy_func(): pass

out_func = decorator1(dummy_func)

if callable(out_func) and dummy_func != out_func:
    print "aha decorated!"
I've never done anything like this, but in general python relies on "duck-typing" in situations like this. So you could just try to decorate a dummy function and see if a callable is returned.

Class decorators vs function decorators [duplicate]

This question already has answers here:
Difference between decorator classes and decorator functions
(3 answers)
Closed 7 years ago.
In python there are two ways to declare decorators:
Class based
class mydecorator(object):
    def __init__(self, f):
        self.f = f

    def __call__(self, *k, **kw):
        # before f actions
        self.f(*k, **kw)
        # after f actions
Function based
def mydecorator(f):
    def decorator(*k, **kw):
        # before f actions
        f(*k, **kw)
        # after f actions
    return decorator
Is there any difference between these declarations?
In which cases each of them should be used?
If you want to keep state in the decorator you should use a class.
For example, this does not work
def mydecorator(f):
    x = 0
    def decorator():
        x += 1  # x is a nonlocal name and can't be modified
        return f(x)
    return decorator
There are many workarounds for this but the simplest way is to use a class
class mydecorator(object):
    def __init__(self, f):
        self.f = f
        self.x = 0

    def __call__(self, *k, **kw):
        self.x += 1
        return self.f(self.x)
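As a side note, in Python 3 the closure version can also keep state via the nonlocal statement, which is one of the workarounds alluded to above (a minimal sketch):

def mydecorator(f):
    x = 0
    def decorator():
        nonlocal x   # allow rebinding the enclosing x
        x += 1
        return f(x)
    return decorator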
When you're creating a callable returning another callable, the function approach is easier and cheaper. There are two main differences:
The function approach works automatically with methods, while with the class approach you'd have to read up on descriptors and define a __get__ method (see the sketch after this list).
The class approach makes keeping state easier. You could use a closure, especially in Python 3, but a class approach is generally preferred for keeping state.
Additionally, the function approach allows you to return the original function, after modifying it or storing it.
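To illustrate the first point, here is a minimal sketch of a class-based decorator that also works on methods by implementing __get__; the binding strategy shown is just one reasonable choice, not the only one:

import functools

class mydecorator(object):
    def __init__(self, f):
        self.f = f
        functools.update_wrapper(self, f)

    def __call__(self, *k, **kw):
        return self.f(*k, **kw)

    def __get__(self, obj, objtype=None):
        # bind like a plain function would, so instance methods receive self
        if obj is None:
            return self
        return functools.partial(self.__call__, obj)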
However, a decorator can return something other than a callable or something more than a callable. With a class, you can:
Add methods and properties to the decorated callable object, or implement operations on them (uh-oh).
Create descriptors that act in a special way when placed in classes (e.g. classmethod, property)
Use inheritance to implement similar but different decorators.
If you have any doubt, ask yourself: Do you want your decorator to return a function that acts exactly like a function should? Use a function returning a function. Do you want your decorator to return a custom object that does something more or something different to what a function does? Create a class and use it as a decorator.
In fact there are no 'two ways'. There is only one way (define a callable object) or as many ways as there are in python to make a callable object (it could be a method of other object, a result of lambda expression, a 'partial' object, anything that is callable).
Function definition is the easiest way to make a callable object and, as the simplest one, is probably the best in most cases. Using a class gives you more possibilities to cleanly code more complicated cases (even in the simplest cases it looks quite elegant), but it is not that obvious what it does.
No, there are (more than) two ways to make callable objects. One is to def a function, which is obviously callable. Another is to define a __call__ method in a class, which will make instances of it callable. And classes themselves are callable objects.
A decorator is nothing more than a callable object, which is intended to accept a function as its sole argument and return something callable. The following syntax:
@decorate
def some_function(...):
    ...

Is just a slightly nicer way of writing:

def some_function(...):
    ...
some_function = decorate(some_function)
The class-based example you give isn't a function that takes a function and returns a function, which is the bog-standard vanilla decorator; it's a class that is initialised with a function and whose instances are callable. To me, this is a little weird if you're not actually using it as a class (Does it have other methods? Does its state change? Do you make several instances of it that have common behaviour encapsulated by the class?). But normal use of your decorated function will not tell the difference (unless it's a particularly invasive decorator), so do whatever feels more natural to you.
Let's just test it!
test_class = """
class mydecorator_class(object):
    def __init__(self, f):
        self.f = f
    def __call__(self, *k, **kw):
        # before f actions
        print 'hi class'
        self.f(*k, **kw)
        print 'goodbye class'
        # after f actions

@mydecorator_class
def cls():
    print 'class'

cls()
"""
test_deco = """
def mydecorator_func(f):
    def decorator(*k, **kw):
        # before f actions
        print 'hi function'
        f(*k, **kw)
        print 'goodbye function'
        # after f actions
    return decorator

@mydecorator_func
def fun():
    print 'func'

fun()
"""
if __name__ == "__main__":
    import timeit
    r = timeit.Timer(test_class).timeit(1000)
    r2 = timeit.Timer(test_deco).timeit(1000)
    print r, r2
I've got results like this: 0.0499339103699 0.0824959278107
Does this mean that the class decorator is about two times faster?

How to get all methods of a Python class with given decorator?

How to get all methods of a given class A that are decorated with the @decorator2?
class A():
    def method_a(self):
        pass

    @decorator1
    def method_b(self, b):
        pass

    @decorator2
    def method_c(self, t=5):
        pass
Method 1: Basic registering decorator
I already answered this question here: Calling functions by array index in Python =)
Method 2: Sourcecode parsing
If you do not have control over the class definition, which is one interpretation of what you'd like to suppose, this is impossible (without code-reading-reflection), since for example the decorator could be a no-op decorator (like in my linked example) that merely returns the function unmodified. (Nevertheless if you allow yourself to wrap/redefine the decorators, see Method 3: Converting decorators to be "self-aware", then you will find an elegant solution)
It is a terrible terrible hack, but you could use the inspect module to read the sourcecode itself, and parse it. This will not work in an interactive interpreter, because the inspect module will refuse to give sourcecode in interactive mode. However, below is a proof of concept.
#!/usr/bin/python3
import inspect

def deco(func):
    return func

def deco2():
    def wrapper(func):
        pass
    return wrapper

class Test(object):
    @deco
    def method(self):
        pass

    @deco2()
    def method2(self):
        pass

def methodsWithDecorator(cls, decoratorName):
    sourcelines = inspect.getsourcelines(cls)[0]
    for i, line in enumerate(sourcelines):
        line = line.strip()
        if line.split('(')[0].strip() == '@' + decoratorName:  # leaving a bit out
            nextLine = sourcelines[i + 1]
            name = nextLine.split('def')[1].split('(')[0].strip()
            yield(name)
It works!:
>>> print(list( methodsWithDecorator(Test, 'deco') ))
['method']
Note that one has to pay attention to parsing and the python syntax, e.g. @deco and @deco(... are valid results, but @deco2 should not be returned if we merely ask for 'deco'. We notice that according to the official python syntax at http://docs.python.org/reference/compound_stmts.html decorators are as follows:
decorator ::= "@" dotted_name ["(" [argument_list [","]] ")"] NEWLINE
We breathe a sigh of relief at not having to deal with cases like @(deco). But note that this still doesn't really help you if you have really really complicated decorators, such as @getDecorator(...), e.g.
def getDecorator():
    return deco
Thus, this best-that-you-can-do strategy of parsing code cannot detect cases like this. Though if you are using this method, what you're really after is what is written on top of the method in the definition, which in this case is getDecorator.
According to the spec, it is also valid to have @foo1.bar2.baz3(...) as a decorator. You can extend this method to work with that. You might also be able to extend this method to return a <function object ...> rather than the function's name, with lots of effort. This method however is hackish and terrible.
Method 3: Converting decorators to be "self-aware"
If you do not have control over the decorator definition (which is another interpretation of what you'd like), then all these issues go away because you have control over how the decorator is applied. Thus, you can modify the decorator by wrapping it, to create your own decorator, and use that to decorate your functions. Let me say that yet again: you can make a decorator that decorates the decorator you have no control over, "enlightening" it, which in our case makes it do what it was doing before but also append a .decorator metadata property to the callable it returns, allowing you to keep track of "was this function decorated or not? let's check function.decorator!". And then you can iterate over the methods of the class, and just check to see if the decorator has the appropriate .decorator property! =) As demonstrated here:
def makeRegisteringDecorator(foreignDecorator):
    """
    Returns a copy of foreignDecorator, which is identical in every
    way(*), except also appends a .decorator property to the callable it
    spits out.
    """
    def newDecorator(func):
        # Call to newDecorator(method)
        # Exactly like old decorator, but output keeps track of what decorated it
        R = foreignDecorator(func)  # apply foreignDecorator, like call to foreignDecorator(method) would have done
        R.decorator = newDecorator  # keep track of decorator
        # R.original = func         # might as well keep track of everything!
        return R
    newDecorator.__name__ = foreignDecorator.__name__
    newDecorator.__doc__ = foreignDecorator.__doc__
    # (*) We can be somewhat "hygienic", but newDecorator still isn't signature-preserving,
    # i.e. you will not be able to get a runtime list of parameters. For that, you need
    # hackish libraries... but in this case, the only argument is func, so it's not a big issue
    return newDecorator
Demonstration for @decorator:
deco = makeRegisteringDecorator(deco)

class Test2(object):
    @deco
    def method(self):
        pass

    @deco2()
    def method2(self):
        pass
def methodsWithDecorator(cls, decorator):
    """
    Returns all methods in CLS with DECORATOR as the
    outermost decorator.

    DECORATOR must be a "registering decorator"; one
    can make any decorator "registering" via the
    makeRegisteringDecorator function.
    """
    for maybeDecorated in cls.__dict__.values():
        if hasattr(maybeDecorated, 'decorator'):
            if maybeDecorated.decorator == decorator:
                print(maybeDecorated)
                yield maybeDecorated
It works!:
>>> print(list( methodsWithDecorator(Test2, deco) ))
[<function method at 0x7d62f8>]
However, a "registered decorator" must be the outermost decorator, otherwise the .decorator attribute annotation will be lost. For example in a train of
@decoOutermost
@deco
@decoInnermost
def func(): ...
you can only see metadata that decoOutermost exposes, unless we keep references to "more-inner" wrappers.
sidenote: the above method can also build up a .decorator that keeps track of the entire stack of applied decorators and input functions and decorator-factory arguments. =) For example if you consider the commented-out line R.original = func, it is feasible to use a method like this to keep track of all wrapper layers. This is personally what I'd do if I wrote a decorator library, because it allows for deep introspection.
There is also a difference between @foo and @bar(...). While they are both "decorator expressions" as defined in the spec, note that foo is a decorator, while bar(...) returns a dynamically-created decorator, which is then applied. Thus you'd need a separate function makeRegisteringDecoratorFactory, that is somewhat like makeRegisteringDecorator but even MORE META:
def makeRegisteringDecoratorFactory(foreignDecoratorFactory):
    def newDecoratorFactory(*args, **kw):
        oldGeneratedDecorator = foreignDecoratorFactory(*args, **kw)
        def newGeneratedDecorator(func):
            modifiedFunc = oldGeneratedDecorator(func)
            modifiedFunc.decorator = newDecoratorFactory  # keep track of decorator
            return modifiedFunc
        return newGeneratedDecorator
    newDecoratorFactory.__name__ = foreignDecoratorFactory.__name__
    newDecoratorFactory.__doc__ = foreignDecoratorFactory.__doc__
    return newDecoratorFactory
Demonstration for @decorator(...):
def deco2():
    def simpleDeco(func):
        return func
    return simpleDeco

deco2 = makeRegisteringDecoratorFactory(deco2)

print(deco2.__name__)
# RESULT: 'deco2'

@deco2()
def f():
    pass
This generator-factory wrapper also works:
>>> print(f.decorator)
<function deco2 at 0x6a6408>
Bonus: let's even try the following with Method #3:
def getDecorator():  # let's do some dispatching!
    return deco

class Test3(object):
    @getDecorator()
    def method(self):
        pass

    @deco2()
    def method2(self):
        pass
Result:
>>> print(list( methodsWithDecorator(Test3, deco) ))
[<function method at 0x7d62f8>]
As you can see, unlike method2, @deco is correctly recognized even though it was never explicitly written in the class. Unlike method2, this will also work if the method is added at runtime (manually, via a metaclass, etc.) or inherited.
Be aware that you can also decorate a class, so if you "enlighten" a decorator that is used to both decorate methods and classes, and then write a class within the body of the class you want to analyze, then methodsWithDecorator will return decorated classes as well as decorated methods. One could consider this a feature, but you can easily write logic to ignore those by examining the argument to the decorator, i.e. .original, to achieve the desired semantics.
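A sketch of that filtering logic, assuming the commented-out R.original = func line in makeRegisteringDecorator is enabled so each registered callable remembers what it wrapped:

import inspect

def methodsWithDecoratorSkippingClasses(cls, decorator):
    # like methodsWithDecorator, but drop decorated nested classes
    for member in methodsWithDecorator(cls, decorator):
        if not inspect.isclass(getattr(member, 'original', member)):
            yield member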
To expand upon @ninjagecko's excellent answer in Method 2: Source code parsing, you can use the ast module introduced in Python 2.6 to perform self-inspection as long as the inspect module has access to the source code.
def findDecorators(target):
    import ast, inspect
    res = {}
    def visit_FunctionDef(node):
        res[node.name] = [ast.dump(e) for e in node.decorator_list]
    V = ast.NodeVisitor()
    V.visit_FunctionDef = visit_FunctionDef
    V.visit(compile(inspect.getsource(target), '?', 'exec', ast.PyCF_ONLY_AST))
    return res
I added a slightly more complicated decorated method:
@x.y.decorator2
def method_d(self, t=5): pass
Results:
> findDecorators(A)
{'method_a': [],
'method_b': ["Name(id='decorator1', ctx=Load())"],
'method_c': ["Name(id='decorator2', ctx=Load())"],
'method_d': ["Attribute(value=Attribute(value=Name(id='x', ctx=Load()), attr='y', ctx=Load()), attr='decorator2', ctx=Load())"]}
If you do have control over the decorators, you can use decorator classes rather than functions:
class awesome(object):
    def __init__(self, method):
        self._method = method

    def __call__(self, obj, *args, **kwargs):
        return self._method(obj, *args, **kwargs)

    @classmethod
    def methods(cls, subject):
        def g():
            for name in dir(subject):
                method = getattr(subject, name)
                if isinstance(method, awesome):
                    yield name, method
        return {name: method for name, method in g()}
class Robot(object):
    @awesome
    def think(self):
        return 0

    @awesome
    def walk(self):
        return 0

    def irritate(self, other):
        return 0
and if I call awesome.methods(Robot) it returns
{'think': <mymodule.awesome object at 0x000000000782EAC8>, 'walk': <mymodule.awesome object at 0x000000000782EB00>}
For those of us who just want the absolute simplest possible case - namely, a single-file solution where we have total control over both the class we're working with and the decorator we're trying to track, I've got an answer. ninjagecko linked to a solution for when you have control over the decorator you want to track, but I personally found it to be complicated and really hard to understand, possibly because I've never worked with decorators until now. So, I've created the following example, with the goal of being as straightforward and simple as possible. It's a decorator, a class with several decorated methods, and code to retrieve+run all methods that have a specific decorator applied to them.
# our decorator
def cool(func, *args, **kwargs):
    def decorated_func(*args, **kwargs):
        print("cool pre-function decorator tasks here.")
        return_value = func(*args, **kwargs)
        print("cool post-function decorator tasks here.")
        return return_value
    # add is_cool property to function so that we can check for its existence later
    decorated_func.is_cool = True
    return decorated_func

# our class, in which we will use the decorator
class MyClass:
    def __init__(self, name):
        self.name = name

    # this method isn't decorated with the cool decorator, so it won't show up
    # when we retrieve all the cool methods
    def do_something_boring(self, task):
        print(f"{self.name} does {task}")

    @cool
    # thanks to *args and **kwargs, the decorator properly passes method parameters
    def say_catchphrase(self, *args, catchphrase="I'm so cool you could cook an egg on me.", **kwargs):
        print(f"{self.name} says \"{catchphrase}\"")

    @cool
    # the decorator also properly handles methods with return values
    def explode(self, *args, **kwargs):
        print(f"{self.name} explodes.")
        return 4

    def get_all_cool_methods(self):
        """Get all methods decorated with the "cool" decorator.
        """
        cool_methods = {name: getattr(self, name)
                        # get all attributes, including methods, properties, and builtins
                        for name in dir(self)
                        # but we only want methods
                        if callable(getattr(self, name))
                        # and we don't need builtins
                        and not name.startswith("__")
                        # and we only want the cool methods
                        and hasattr(getattr(self, name), "is_cool")
                        }
        return cool_methods

if __name__ == "__main__":
    jeff = MyClass(name="Jeff")
    cool_methods = jeff.get_all_cool_methods()

    for method_name, cool_method in cool_methods.items():
        print(f"{method_name}: {cool_method} ...")
        # you can call the decorated methods you retrieved, just like normal,
        # but you don't need to reference the actual instance to do so
        return_value = cool_method()
        print(f"return value = {return_value}\n")
Running the above example gives us the following output:
explode: <bound method cool.<locals>.decorated_func of <__main__.MyClass object at 0x00000220B3ACD430>> ...
cool pre-function decorator tasks here.
Jeff explodes.
cool post-function decorator tasks here.
return value = 4
say_catchphrase: <bound method cool.<locals>.decorated_func of <__main__.MyClass object at 0x00000220B3ACD430>> ...
cool pre-function decorator tasks here.
Jeff says "I'm so cool you could cook an egg on me."
cool post-function decorator tasks here.
return value = None
Note that the decorated methods in this example have different types of return values and different signatures, so the practical value of being able to retrieve and run them all is a bit dubious. However, in cases where there are many similar methods, all with the same signature and/or type of return value (like if you're writing a connector to retrieve unnormalized data from one database, normalize it, and insert it into a second, normalized database, and you have a bunch of similar methods, e.g. 15 read_and_normalize_table_X methods), being able to retrieve (and run) them all on the fly could be more useful.
Maybe, if the decorators are not too complex (but I don't know if there is a less hacky way).
def decorator1(f):
    def new_f():
        print "Entering decorator1", f.__name__
        f()
    new_f.__name__ = f.__name__
    return new_f

def decorator2(f):
    def new_f():
        print "Entering decorator2", f.__name__
        f()
    new_f.__name__ = f.__name__
    return new_f

class A():
    def method_a(self):
        pass

    @decorator1
    def method_b(self, b):
        pass

    @decorator2
    def method_c(self, t=5):
        pass

print A.method_a.im_func.func_code.co_firstlineno
print A.method_b.im_func.func_code.co_firstlineno
print A.method_c.im_func.func_code.co_firstlineno
I don't want to add much, just a simple variation of ninjagecko's Method 2. It works wonders.
Same code, but using list comprehension instead of a generator, which is what I needed.
def methodsWithDecorator(cls, decoratorName):
    sourcelines = inspect.getsourcelines(cls)[0]
    return [sourcelines[i + 1].split('def')[1].split('(')[0].strip()
            for i, line in enumerate(sourcelines)
            if line.split('(')[0].strip() == '@' + decoratorName]
A simple way to solve this problem is to put code in the decorator that adds each function/method, that is passed in, to a data set (for example a list).
e.g.
functions = []

def deco(foo):
    functions.append(foo)
    return foo
now every function with the deco decorator will be added to functions.
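A short usage sketch tying this back to the question's class (the final membership check is just one way you might narrow the registry down to a particular class):

class A(object):
    @deco
    def method_c(self, t=5):
        pass

decorated = [name for name, member in vars(A).items() if member in functions]
print(decorated)   # ['method_c']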
