Can a method reference itself anonymously? - python

I just wrote a small function that returns its own arguments as a dict:
from inspect import signature

class MyClass:
    def MyFunc(self, thing1=0, thing2=0, thing3=0, thing4="", thing5=""):
        P = {}
        for p in list(signature(self.MyFunc).parameters):
            P[p] = eval(p)
        return P
Setting aside why anyone would want to do that (and accepting that I've distilled a very simple example out of a broader context to explore a very specific question), there's an explicit reference self.MyFunc there.
I've seen complicated ways of avoiding that like:
globals()[inspect.getframeinfo(inspect.currentframe()).function]
and
globals()[sys._getframe().f_code.co_name]
but I wonder if there's something like the anonymous super() construct Python offers for referring to the same-named method in a parent class: something that would elegantly let a function refer to itself anonymously, i.e. without having to name itself.
I suspect not, and that there is no way to do this as of Python 3.8, but it seemed a question worth raising, exploring, and inviting correction of my suspicion on.

No such construct exists. Code in a function has no special way to refer to that function.
Execution of a function doesn't actually involve the function itself, after initial startup. After startup, all that's needed from the function is the code object, and that's the only part the stack frame keeps a reference to. You can't recover the function from just the code object - many functions can share the same code object.
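To see why, here is a small CPython illustration: two closures produced by the same factory share one code object, so the code object visible to the running frame cannot identify which function is executing.
import inspect

def make_constant(n):
    def f():
        # The running frame references only the code object, not f itself.
        return inspect.currentframe().f_code
    return f

f1, f2 = make_constant(1), make_constant(2)

# Both closures share a single code object...
assert f1.__code__ is f2.__code__
# ...and that shared code object is all the frame can see:
assert f1() is f2.__code__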

You can do it with a decorator that adds the parameter list to the arguments passed to the method.
The same approach could be extended into a class decorator that does this for some or all of the methods of a class (a rough sketch of that extension follows the example output below).
Here's an example implementation of the single-method decorator:
from inspect import signature

def add_paramlist(func):
    paramlist = list(signature(func).parameters)
    try:
        paramlist.remove('paramlist')
    except ValueError as exc:
        raise RuntimeError(f'"paramlist" argument not declared in signature of '
                           f'{func.__name__}() method') from exc

    def wrapped(*args, **kwargs):
        return func(paramlist, *args, **kwargs)

    return wrapped
class MyClass:
    @add_paramlist
    def MyFunc(paramlist, self, thing1=0, thing2=0, thing3=0, thing4="", thing5=""):
        P = {}
        for p in paramlist:
            P[p] = eval(p)
        return P

from pprint import pprint

inst = MyClass()
res = inst.MyFunc(thing1=2, thing2=2, thing3=2, thing4="2", thing5="2")
pprint(res)
Output:
{'self': <__main__.MyClass object at 0x00566B38>,
 'thing1': 2,
 'thing2': 2,
 'thing3': 2,
 'thing4': '2',
 'thing5': '2'}
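For what it's worth, here's a rough sketch of the class-decorator extension mentioned above. add_paramlists is a hypothetical name, and the sketch builds on the add_paramlist decorator already defined:
def add_paramlists(*method_names):
    """Hypothetical class decorator: apply add_paramlist to the named methods."""
    def decorate(cls):
        for name in method_names:
            setattr(cls, name, add_paramlist(getattr(cls, name)))
        return cls
    return decorate

@add_paramlists("MyFunc")
class MyClass2:
    def MyFunc(paramlist, self, thing1=0, thing2=0):
        P = {}
        for p in paramlist:
            P[p] = eval(p)
        return P

print(MyClass2().MyFunc(thing1=1))
# {'self': <__main__.MyClass2 object at ...>, 'thing1': 1, 'thing2': 0}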

As user2357112 says, there is no hack-less way to get the name of a function from within that function. But if you just want a function to return its arguments as a dict, you can use this:
class MyClass:
    def MyFunc(self, **kwargs):
        return kwargs
or, if you want to use *args as well:
class MyClass:
    def MyFunc(self, *args, **kwargs):
        names = ["thing%d" % i for i in range(1, 6)]
        for v, k in zip(args, names):
            if k in kwargs:
                raise ValueError
            else:
                kwargs[k] = v
        return kwargs
Or, using a hack based on locals():
class MyClass:
    def MyFunc(self, thing1=0, thing2=0, thing3=0, thing4="", thing5=""):
        d = locals().copy()
        del d["self"]
        return d
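A quick check of the locals() version (CPython 3.x, where dicts preserve insertion order; note that locals() must be captured before any other local names are created):
inst = MyClass()
print(inst.MyFunc(thing1=2, thing4="x"))
# {'thing1': 2, 'thing2': 0, 'thing3': 0, 'thing4': 'x', 'thing5': ''}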

Related

Monkeypatch with instance method

I'm trying to monkeypatch pandas Panel's slicing (__getitem__). This is straightforward to do with a basic function, foo:
from pandas import Panel
Panel.__getitem__ = ORIGINAL_getitem

def newgetitem(panel, *args, **kwargs):
    """ Append a string to return of panel.__getitem__"""
    out = super(Panel, panel).__getitem__(*args, **kwargs)
    return out + 'custom stuff added'

Panel.__getitem__ = newgetitem
where ORIGINAL_getitem stores the original Panel method. I'm trying to extend this to the case where foo() is not a function, but an instance method of an object, Foo. For example:
class Foo:
    name = 'some name'

    def newgetitem(self, panel, *args, **kwargs):
        """ Append a string to return of panel.__getitem__,
        but take attributes from self, like self.name
        """
        out = super(Panel, panel).__getitem__(*args, **kwargs)
        return out + 'custom stuff added including name' + self.name
Foo.foo() must access the attribute self.name. Therefore the monkeypatched function needs a reference to the Foo instance somehow, in addition to the Panel. How can I monkeypatch Panel with Foo.foo() and make self.name accessible?
The switching between the monkeypatched and original functions happens in another method, Foo.set_backend():
class Foo:
    name = 'some name'

    def foo(self):
        return 'bar, called by %s' % self.name

    def set_backend(self, backend):
        """ Swap between new or original slicing."""
        if backend != 'pandas':
            Panel.__getitem__ = newgetitem
        else:
            Panel.__getitem__ = ORIGINAL_getitem
What I really need is for newgetitem to maintain a reference to self.
Solution Attempts
So far I've tried making newgetitem() a pure function and using functools.partial to pass a reference to self in. This doesn't work. Something like:
import functools

def newgetitem(foo_instance, panel, *args, **kwargs):
    ....

class Foo:
    ...
    def set_backend(self, backend):
        """ Swap between new or original slicing."""
        if backend != 'pandas':
            partialfcn = functools.partial(newgetitem, self)
            Panel.__getitem__ = partialfcn
        else:
            Panel.__getitem__ = ORIGINAL_getitem
But this doesn't work: a reference to self is passed, but the calling Panel is no longer accessible. That is,
panel['50']
passes a reference to Foo, not to Panel.
Yes, I know this is bad practice, but it's just a workaround for the time being.
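The root cause: functools.partial objects are not descriptors, so storing one on a class does not give bound-method behaviour. A minimal illustration with hypothetical names:
import functools

def report(foo_instance, obj):
    return foo_instance, obj

class C(object):
    pass

C.method = functools.partial(report, "the Foo instance")

c = C()
# A plain function stored on the class would receive c automatically;
# a partial is not a descriptor, so c is never passed:
try:
    c.method()
except TypeError as e:
    print(e)  # report() missing 1 required positional argument: 'obj'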
You can use patch from the mock framework to handle your case. Even though it is designed for testing, its primary job is monkey patching in a defined context.
Your set_backend() method could be:
from mock import patch

def set_backend(self, backend):
    if backend != 'pandas' and self._patched_get_item is None:
        self._patched_get_item = patch("pandas.Panel.__getitem__", autospec=True,
                                       side_effect=self._getitem)
        self._patched_get_item.start()
    elif backend == 'pandas' and self._patched_get_item is not None:
        self._patched_get_item.stop()
        self._patched_get_item = None
That will work whether self._getitem is a method or a reference to a function.
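Here's a rough, untested sketch of the full wiring, with a stand-in Box class in place of pandas.Panel (all names hypothetical). With autospec=True the patched __getitem__ passes the instance through to side_effect, so self._getitem sees both the Foo instance (via binding) and the Box instance (as an argument):
from unittest.mock import patch

class Box(object):
    def __getitem__(self, key):
        return key.upper()

class Foo(object):
    name = 'some name'

    def __init__(self):
        self._patched_get_item = None

    def _getitem(self, box, key):
        # Has self (the Foo) and receives the Box instance as an argument.
        return '%s handled %s' % (self.name, key)

    def set_backend(self, backend):
        if backend != 'plain' and self._patched_get_item is None:
            self._patched_get_item = patch.object(
                Box, '__getitem__', autospec=True, side_effect=self._getitem)
            self._patched_get_item.start()
        elif backend == 'plain' and self._patched_get_item is not None:
            self._patched_get_item.stop()
            self._patched_get_item = None

foo = Foo()
box = Box()
foo.set_backend('custom')
print(box['x'])  # some name handled x
foo.set_backend('plain')
print(box['x'])  # X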
One way to do this is to create a closure (a function with reference to names other than locals or globals). A simple closure:
def g(x):
    def f():
        """f has no global or local reference to x, but can refer to the locals
        of the context it was created in (also known as nonlocals)."""
        return x
    return f

func = g(1)
assert func() == 1
I don't have pandas on my system, but it works much the same with a dict.
class MyDict(dict):
    pass

d = MyDict(a=1, b=2)
assert d['a'] == 1

class Foo:
    name = 'name'

    def create_getitem(fooself, cls):
        def getitem(self, *args, **kwargs):
            out = super(cls, self).__getitem__(*args, **kwargs)
            return out, 'custom', fooself.name
        # Above references fooself, a name that is not defined locally in the
        # function, but as part of the scope the function was created in.
        return getitem

MyDict.__getitem__ = Foo().create_getitem(MyDict)

assert d['a'] == (1, 'custom', Foo.name)
print(d['a'])
The basics of monkey patching are straightforward, but it can quickly become tricky and subtle, especially if you're aiming at a solution that works on both Python 2 and Python 3.
Furthermore, quickly hacked solutions are usually not very readable/maintainable, unless you manage to wrap the monkey patching logic nicely.
That's why I invite you to have a look at a library that I wrote especially for this purpose. It is named Gorilla and you can find it on GitHub.
In short, it provides a cool set of features, it has a wide range of unit tests, and it comes with a fancy doc that should cover everything you need to get started. Make sure to also check the FAQ!

Magic assign for custom parameters

I want to give users of my library an API with an easier way to distinguish the different types of parameters passed to a function. All groups of arguments are defined in advance (for now I have three groups), but their attributes need to be constructed at run time. I can do this in Django ORM style, where a double underscore separates the two parts of a parameter name, but that is very unreadable. Example:
def api_function(**kwargs):
    """ Separate passed arguments """

api_function(post__arg1='foo', api__arg1='bar', post__arg2='foo2')
SQLAlchemy does this in a better way, but only to compare attributes, and all args are defined earlier. Example:
class API(object):
    arg1 = Arg()
    arg2 = Arg()

class Post(object): #...

def api_function(*args):
    """ Separate passed arguments """

api_function(POST.arg1=='foo', API.arg1=='bar', POST.arg2=='foo2')
What I would like to achieve is behaviour like this:
class API(object): # Magic

class POST(object): # Magic

def api_function(*args):
    """ Separate passed arguments """

api_function(POST.arg1='foo', API.arg1='bar', POST.arg2='foo2')
What I have tried:
declaring a metaclass with __setattr__ defined, but evaluating it raises SyntaxError: keyword can't be an expression
declaring __set__, but it is designed for known attributes
My questions are:
Is it even possible in Python to make something like the third snippet work?
If not, is there any solution that comes really close? Ideally it would use the assignment operator, API.arg1='foo'; at worst, API(arg1='foo').
Requirements: it should work at least on Python 2.7, and ideally also on Python 3.2.
EDIT1
My first test, which uses the equality operator (though it should NEVER be used this way):
class APIMeta(type):
    def __getattr__(cls, item):
        return ApiData(item, None)

class API(object):
    __metaclass__ = APIMeta

    def __init__(self, key, value):
        self.key = key
        self.value = value

    def __str__(self):
        return "{0}={1}".format(self.key, self.value)

    def __eq__(self, other):
        self.value = other
        return self

def print_api(*api_data):
    for a in api_data:
        print(str(a))

print_api(API.page=='3', API=='bar')
It works, but using == suggests that I want to compare something, when I actually want to assign a value.
NOTE: I don't know how much I like this scheme you want. But I know one annoying thing will be all the imports needed to call api_function, e.g. from api import POST, API, api_function.
As I said in the comments, the first way is not possible, because assignment (=) is a statement, not an expression, so it can't return a value.
But the other way you asked for certainly is:
class POST(object):
    def __init__(self, **kwargs):
        self.args = kwargs

    # You'll also probably want to make this function a little safer.
    def __getattr__(self, name):
        return self.args[name]

def api_function(*args):
    # Update this to how complicated the handling needs to be,
    # but you get the general idea...
    post_data = None
    for a in args:
        if isinstance(a, POST):
            post_data = a.args
    if post_data is None:
        raise Exception('This function needs a POST object passed.')
    print post_data
Using it:
>>> api_function('foo')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 7, in api_function
Exception: This function needs a POST object passed.
>>> api_function(POST(arg1='foo'))
{'arg1': 'foo'}
>>> api_function(POST(arg1='foo',
... arg2='bar'
... )
... )
{'arg1': 'foo', 'arg2': 'bar'}
Here's my solution. It's not the best in design, as the structure of the argument grouper is nested quite deep, so I'd appreciate feedback on it:
class ArgumentGrouper(object):
    """Transforms a function so that you can apply arguments in named groups.

    This system isn't tested as thoroughly as something with so many moving
    parts should be. Use at own risk.

    Usage:
        @ArgumentGrouper("foo", "bar")
        def method(regular_arg, foo__arg1, bar__arg2):
            print(regular_arg + foo__arg1 + bar__arg2)
        method.foo(", ").bar("world!")("Hello")()  # Prints "Hello, world!"
    """

    def __call__(self, func):
        """Decorate the function."""
        return self.Wrapper(func, self.argument_values)

    def __init__(self, *argument_groups):
        """Constructor.

        argument_groups -- The names of argument groups in the function.
        """
        self.argument_values = {i: {} for i in argument_groups}

    class Wrapper(object):
        """This is the result of decorating the function. You can call group
        names as functions to supply their keyword arguments.
        """

        def __call__(self, *args):
            """Execute the decorated function by passing any given arguments
            and predefined group arguments.
            """
            kwargs = {}
            for group, values in self.argument_values.items():
                for name, value in values.items():
                    # Add a new argument in the form foo__arg1 to kwargs, as
                    # per the supplied arguments.
                    new_name = "{}__{}".format(
                        group,
                        name
                    )
                    kwargs[new_name] = value
            # Invoke the function with the determined arguments.
            return self.func(*args, **kwargs)

        def __init__(self, func, argument_values):
            """Constructor.

            func -- The decorated function.
            argument_values -- A dict with the current values for group
                arguments. Must be a reference to the actual dict, since each
                WrappedMethod uses it.
            """
            self.func = func
            self.argument_values = argument_values

        def __getattr__(self, name):
            """When trying to call `func.foo(arg1="bar")`, provide `foo`. TODO:
            This would be better handled at initialization time.
            """
            if name in self.argument_values:
                return self.WrappedMethod(name, self, self.argument_values)
            else:
                return self.__dict__[name]

        class WrappedMethod(object):
            """For `func.foo(arg1="bar")`, this is `foo`. Pretends to be a
            function that takes the keyword arguments to be supplied to the
            decorated function.
            """

            def __call__(self, **kwargs):
                """`foo` has been called, record the arguments passed."""
                for k, v in kwargs.items():
                    self.argument_values[self.name][k] = v
                return self.wrapper

            def __init__(self, name, wrapper, argument_values):
                """Constructor.

                name -- The name of the argument group. (This is the string
                    "foo".)
                wrapper -- The decorator. We need this so that we can return it
                    to chain calls.
                argument_values -- A dict with the current values for group
                    arguments. Must be a reference to the actual dict, since
                    each WrappedMethod uses it.
                """
                self.name = name
                self.wrapper = wrapper
                self.argument_values = argument_values
Then, usage:
@ArgumentGrouper("post", "api")
def api_function(regular_arg, post__arg1, post__arg2, api__arg3):
    print("Got regular args {}".format(regular_arg))
    print("Got API args {}, {}, {}".format(post__arg1, post__arg2, api__arg3))

api_function.post(
    arg1="foo", arg2="bar"
).api(
    arg3="baz"
)
api_function("foo")
Output:
Got regular args foo
Got API args foo, bar, baz
It should be simple to scrape argument group names by introspection.
You'll notice the argument naming convention is hardcoded into the WrappedMethod, so you'll have to make sure you're okay with that.
You can also invoke it in one statement:
api_function.post(
    arg1="foo", arg2="bar"
).api(
    arg3="baz"
)("foo")
Or you could add a dedicated run method which would invoke it, which would just take the place of Wrapper.__call__.
Python doesn't allow use of the assignment operator inside other code, so:
(a=1)
func((a=1))
will raise SyntaxError. This means that it is not possible to use it this way. Moreover:
func(API.arg1=3)
will be treated as an assignment whose left side is the argument API.arg1, which is not a valid Python variable name. The only solution is to do this in SQLAlchemy style:
func({
    API.arg1: 'foo',
    API.arg2: 'bar',
    DATA.arg1: 'foo1',
})
or
func(**{
    API.arg1: 'foo',
    API.arg2: 'bar',
    DATA.arg1: 'foo1',
})
or simply:
func(API(arg1='foo', arg2='bar'), POST(arg1='foo1'), POST(arg2='bar1'))
Thank you for your interest and answers.

Is it possible to get a dictionary of passed-in parameters, similar to kwargs (Python)?

I'm currently creating an object like this:
class Obj(object):
    def __init__(self, **kwargs):
        params = ['val1','val2','val3','val4',...]
        for p in params:
            setattr(self, p, kwargs.get(p, None))
I'm doing this so I don't have to do this:
class Obj(object):
    def __init__(self, val1=None, val2=None, val3=None, val4=None, ...):
        self.val1 = val1
        self.val2 = val2
        self.val3 = val3
        self.val4 = val4
        ...
My question is, can you do a mix of the two? Where I can define the expected parameters yet still loop the parameters to set the attributes? I like the idea of defining the expected parameters because it is self documenting and other developers don't have to search for what kwargs are used.
I know it seems pretty petty but I'm creating an object from some XML so I'll be passing in many parameters, it just clutters the code and bugs me.
I did google this but couldn't find anything, probably because dictionary and kwargs together point to kwarg examples.
UPDATE: To be more specific, is it possible to get a dictionary of passed in parameters so I don't have to use kwargs at all?
Pseudocode:
class Obj(object):
    def __init__(self, val1=None, val2=None, val3=None, val4=None, ...):
        for k, v in dictionary_of_parameters.iteritems():
            setattr(self, k, v)
You can use the inspect module:
import inspect

def myargs(val1, val2, val3=None, val4=5):
    print inspect.currentframe().f_locals
it shows all the locals available on the current stack frame.
myargs('a','b')
==> {'val3': None, 'val2': 'b', 'val1': 'a', 'val4': 5}
(note: it's not guaranteed to be implemented on all Python interpreters)
Edit: I concur that it's not a pretty solution. What I would do is more like:
def _yourargs(*names):
    "returns a dict with your named local vars"
    alllocs = inspect.stack()[1][0].f_locals
    return {n: alllocs[n] for n in names}

def askformine(val1, val2, val3=None, val4=5):
    "example to show just those args i'm interested in"
    print _yourargs('val1', 'val2', 'val3', 'val4')

class Obj(object):
    "example inserting some named args as instance attributes"
    def __init__(self, arg1, arg2=4):
        self.__dict__.update(_yourargs('arg1', 'arg2'))
Edit 2, slightly better:
def pickdict(d, *names):
    "picks some values from a dict"
    return {n: d[n] for n in names}

class Obj(object):
    "example inserting some named args as instance attributes"
    def __init__(self, arg1, arg2=4):
        self.__dict__.update(pickdict(locals(), 'arg1', 'arg2'))
There is no nice way to get a dictionary of all the arguments to a function. The **kwargs syntax only collects up the extra keyword arguments, not the ones that match explicit parameters in the function definition.
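A quick illustration of that point:
def f(a, b=2, **kwargs):
    return kwargs

# 'a' and 'b' bind to the explicit parameters and never reach kwargs:
print(f(1, b=3, c=4))  # {'c': 4}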
Although you won't be able to get the parameters without using kwargs or the inspect module (see other answers), you can do something like this...
class Obj(object):
    def __init__(self, **kwargs):
        self.__dict__.update(**kwargs)
Every object has a dictionary that stores all of the attributes, which you can access via self.__dict__. Then you're just using update to set all of the attributes in that object's internal dictionary.
See this question on some discussion of this method.
If you want to obtain the args dict at the very top of your method, before you define any locals, this is as simple as:
class Obj(object):
    def __init__(self, val1=None, val2=None, val3=None, val4=None):
        kwargs = dict(locals())
To read this dict later on, some introspection magic is required:
import sys

class Obj(object):
    def __init__(self, val1=None, val2=None, val3=None, val4=None):
        # feel free to add more locals
        loc = dict(locals())
        fun = sys._getframe().f_code
        kwargs = {x: loc[x] for x in fun.co_varnames[:fun.co_argcount]}
You can also make the latter reusable by adding a function like this:
def getargs():
    f = sys._getframe(1)
    return {x: f.f_locals[x] for x in f.f_code.co_varnames[:f.f_code.co_argcount]}
and then:
class Obj(object):
    def __init__(self, val1=None, val2=None, val3=None, val4=None):
        # feel free to add more locals
        kwargs = getargs()
This is CPython-specific, I guess.
Yes you can mix the two.
See below:
def method(a, b=1, *args, **kwargs):
    '''some code'''
This is valid. Here:
'a' is a required argument
'b' is a default argument
'args' will have all the non-keyword arguments and
'kwargs' will have all the keyword arguments.
Example:
method(1,2,3,4,5,test=6,test1=7)
This call will have:
a=1
b=2
args=(3,4,5)
kwargs={'test':6,'test1':7}
A kind of ugly workaround: inject the extra arguments into kwargs and use it where you want to loop over all keyword arguments (P.S. this is an example usage of the inspect module, but not recommended for production use):
#!/usr/bin/env python
import inspect

def inject_defaults(func):
    """ injects '__defaults' key into kwargs,
    so it can be merged with kwargs in the decorated method """
    args, varargs, varkwargs, defaults = inspect.getargspec(func)
    have_defaults = args[-len(defaults):]
    defaults_dict = dict(zip(have_defaults, defaults))

    def fun(*args, **kwargs):
        kwargs['__defaults'] = defaults_dict
        return func(*args, **kwargs)
    return fun

@inject_defaults
def f1(a, b, c, x=1, **kwargs):
    kwargs.update(kwargs['__defaults'])
    del kwargs['__defaults']
    for k, v in kwargs.items():
        # here, x, y and z will appear
        print(k, v)

f1(1, 2, 3, y=3, z=2)
# prints
# ('y', 3)
# ('x', 1)
# ('z', 2)

Python: Bind an Unbound Method?

In Python, is there a way to bind an unbound method without calling it?
I am writing a wxPython program, and for a certain class I decided it'd be nice to group the data of all of my buttons together as a class-level list of tuples, like so:
class MyWidget(wx.Window):
    buttons = [("OK", OnOK),
               ("Cancel", OnCancel)]
    # ...

    def Setup(self):
        for text, handler in MyWidget.buttons:
            # This following line is the problem line.
            b = wx.Button(parent, label=text).Bind(wx.EVT_BUTTON, handler)
The problem is, since all of the values of handler are unbound methods, my program explodes in a spectacular blaze and I weep.
I was looking around online for a solution to what seems like should be a relatively straightforward, solvable problem. Unfortunately I couldn't find anything. Right now, I'm using functools.partial to work around this, but does anyone know if there's a clean-feeling, healthy, Pythonic way to bind an unbound method to an instance and continue passing it around without calling it?
All functions are also descriptors, so you can bind them by calling their __get__ method:
bound_handler = handler.__get__(self, MyWidget)
Here's R. Hettinger's excellent guide to descriptors.
As a self-contained example pulled from Keith's comment:
def bind(instance, func, as_name=None):
    """
    Bind the function *func* to *instance*, with either provided name *as_name*
    or the existing name of *func*. The provided *func* should accept the
    instance as the first argument, i.e. "self".
    """
    if as_name is None:
        as_name = func.__name__
    bound_method = func.__get__(instance, instance.__class__)
    setattr(instance, as_name, bound_method)
    return bound_method

class Thing:
    def __init__(self, val):
        self.val = val

something = Thing(21)

def double(self):
    return 2 * self.val

bind(something, double)
something.double()  # returns 42
This can be done cleanly with types.MethodType. Example:
import types

def f(self):
    print(self)

class C:
    pass

meth = types.MethodType(f, C(), C)  # Bind f to an instance of C
print(meth)  # prints <bound method C.f of <__main__.C object at 0x01255E90>>
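Note that the three-argument form above is Python 2; on Python 3, types.MethodType takes just the function and the instance. A small sketch:
import types

class C:
    pass

def f(self):
    return self

c = C()
c.f = types.MethodType(f, c)  # bind f to this particular instance
assert c.f() is c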
Creating a closure with self in it will not technically bind the function, but it is an alternative way of solving the same (or very similar) underlying problem. Here's a trivial example:
self.method = (lambda self: lambda args: self.do(args))(self)
This will bind self to handler:
bound_handler = lambda *args, **kwargs: handler(self, *args, **kwargs)
This works by passing self as the first argument to the function. object.function() is just syntactic sugar for function(object).
Late to the party, but I came here with a similar question: I have a class method and an instance, and want to apply the instance to the method.
At the risk of oversimplifying the OP's question, I ended up doing something less mysterious that may be useful to others who arrive here (caveat: I'm working in Python 3 -- YMMV).
Consider this simple class:
class Foo(object):
def __init__(self, value):
self._value = value
def value(self):
return self._value
def set_value(self, value):
self._value = value
Here's what you can do with it:
>>> meth = Foo.set_value # the method
>>> a = Foo(12) # a is an instance with value 12
>>> meth(a, 33) # apply instance and method
>>> a.value() # voila - the method was called
33

Why Python decorators rather than closures?

I still haven't got my head around decorators in Python.
I've already started using a lot of closures to do things like customize functions and classes in my coding.
Eg.
class Node:
    def __init__(self, val, children):
        self.val = val
        self.children = children

def makeRunner(f):
    def run(node):
        f(node)
        for x in node.children:
            run(x)
    return run

tree = Node(1, [Node(2,[]), Node(3, [Node(4,[]), Node(5,[])])])

def pp(n): print "%s," % n.val

printTree = makeRunner(pp)
printTree(tree)
As far as I can see, decorators are just a different syntax for doing something similar.
Instead of
def pp(n): print "%s," % n.val
printTree = makeRunner(pp)
I would write:
@makeRunner
def printTree(n): print "%s," % n.val
Is this all there is to decorators? Or is there a fundamental difference that I've missed?
While it is true that syntactically, decorators are just "sugar", that is not the best way to think about them.
Decorators allow you to weave functionality into your existing code without actually modifying it. And they allow you to do it in a way that is declarative.
This allows you to use decorators to do aspect-oriented programming (AOP). So you want to use a decorator when you have a cross-cutting concern that you want to encapsulate in one place.
The quintessential example would probably be logging, where you want to log the entry or exit of a function, or both. Using a decorator is equivalent to applying advice (log this!) to a joinpoint (during method entry or exit).
Method decoration is a concept like OOP or list comprehensions. As you point out, it is not always appropriate, and can be overused. But in the right place, it can be useful for making code more modular and decoupled.
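For instance, a minimal sketch of such a logging decorator (the entry/exit prints stand in for a real logger):
import functools

def log_calls(func):
    """Log entry to and exit from the decorated function."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        print("entering %s" % func.__name__)
        result = func(*args, **kwargs)
        print("leaving %s" % func.__name__)
        return result
    return wrapper

@log_calls
def work():
    print("working")

work()  # prints: entering work / working / leaving work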
Are your examples real code, or just examples?
If they're real code, I think you overuse decorators, probably because of your background (i.e. you are used to other programming languages)
Stage 1: avoiding decorators
def run(rootnode, func):
    def _run(node):  # recursive internal function
        func(node)
        for x in node.children:
            _run(x)  # recurse
    _run(rootnode)  # initial run
This run function obsoletes makeRunner. Your example becomes:
def pp(n): print "%s," % n.val
run(tree, pp)
However, this completely ignores generators, so…
Stage 2: using generators
class Node:
    def __init__(self, val, children):
        self.val = val
        self.children = children

    def __iter__(self):  # recursive
        yield self
        for child in self.children:
            for item in child:  # recurse
                yield item

def run(rootnode, func):
    for node in rootnode:
        func(node)
Your example remains
def pp(n): print "%s," % n.val
run(tree, pp)
Note that the special method __iter__ allows us to use the for node in rootnode: construct. If you don't like it, just rename the __iter__ method to e.g. walker, and change the run loop into: for node in rootnode.walker():
Obviously, the run function could be a method of class Node instead.
As you see, I suggest you use directly run(tree, func) instead of binding them to the name printTree, but you can use them in a decorator, or you can make use of the functools.partial function:
printTree = functools.partial(run, func=pp)
and from then on, you would just
printTree(tree)
Decorators, in the general sense, are functions or classes that wrap another object and extend, or decorate, it. The decorator supports the same interface as the wrapped function or object, so the receiver doesn't even know the object has been decorated.
A closure is an anonymous function that refers to its parameters or other variables outside its scope.
So basically, decorators use closures; they don't replace them.
def increment(x):
    return x + 1

def double_increment(func):
    def wrapper(x):
        print 'decorator executed'
        r = func(x)  # --> func is saved in __closure__
        y = r * 2
        return r, y
    return wrapper

@double_increment
def increment(x):
    return x + 1
>>> increment(2)
decorator executed
(3, 6)
>>> increment.__closure__
(<cell at 0x02C7DC50: function object at 0x02C85DB0>,)
>>> increment.__closure__[0].cell_contents
<function increment at 0x02C85DB0>
So the decorator saves the original function with closure.
Following up Dutch Master's AOP reference, you'll find that using decorators becomes especially useful when you start adding parameters to modify the behaviour of the decorated function/method, and reading that above the function definition is so much easier.
In one project I recall, we needed to supervise tons of celery tasks, so we came up with the idea of using a decorator to plug and tweak as required, something like:
class tracked_with(object):
    """
    Method decorator used to track the results of celery tasks.
    """
    def __init__(self, model, unique=False, id_attr='results_id',
                 log_error=False, raise_error=False):
        self.model = model
        self.unique = unique
        self.id_attr = id_attr
        self.log_error = log_error
        self.raise_error = raise_error

    def __call__(self, fn):
        def wrapped(*args, **kwargs):
            # Unique passed by parameter has priority above the decorator def
            unique = kwargs.get('unique', None)
            if unique is not None:
                self.unique = unique
            if self.unique:
                caller = args[0]
                pending = self.model.objects.filter(
                    state=self.model.Running,
                    task_type=caller.__class__.__name__
                )
                if pending.exists():
                    raise AssertionError('Another {} task is already running'
                                         ''.format(caller.__class__.__name__))
            results_id = kwargs.get(self.id_attr)
            try:
                result = fn(*args, **kwargs)
            except Retry:
                # Retry must always be raised to retry a task
                raise
            except Exception as e:
                # Error, update stats, log/raise/return depending on values
                if results_id:
                    self.model.update_stats(results_id, error=e)
                if self.log_error:
                    logger.error(e)
                if self.raise_error:
                    raise
                else:
                    return e
            else:
                # No error, save results in refresh object and return
                if results_id:
                    self.model.update_stats(results_id, **result)
                return result
        return wrapped
Then we simply decorated the run method on the tasks with the params required for each case, like:
class SomeTask(Task):
    @tracked_with(RefreshResults, unique=True, log_error=False)
    def run(self, *args, **kwargs)...
Then changing the behaviour of the task (or removing the tracking altogether) meant tweaking one param, or commenting out the decorated line. Super easy to implement, but more importantly, super easy to understand on inspection.
