Why Python decorators rather than closures?

I still haven't got my head around decorators in Python.
I've already started using a lot of closures to do things like customize functions and classes in my coding.
Eg.
class Node:
    def __init__(self, val, children):
        self.val = val
        self.children = children

def makeRunner(f):
    def run(node):
        f(node)
        for x in node.children:
            run(x)
    return run

tree = Node(1, [Node(2, []), Node(3, [Node(4, []), Node(5, [])])])

def pp(n): print "%s," % n.val

printTree = makeRunner(pp)
printTree(tree)
As far as I can see, decorators are just a different syntax for doing something similar.
Instead of
def pp(n) : print "%s," % n.val
printTree = makeRunner(pp)
I would write :
@makeRunner
def printTree(n): print "%s," % n.val
Is this all there is to decorators? Or is there a fundamental difference that I've missed?

While it is true that syntactically, decorators are just "sugar", that is not the best way to think about them.
Decorators allow you to weave functionality into your existing code without actually modifying it. And they allow you to do it in a way that is declarative.
This allows you to use decorators to do aspect-oriented programming (AOP). So you want to use a decorator when you have a cross-cutting concern that you want to encapsulate in one place.
The quintessential example would probably be logging, where you want to log the entry or exit of a function, or both. Using a decorator is equivalent to applying advice (log this!) to a joinpoint (during method entry or exit).
Method decoration is a concept like OOP or list comprehensions. As you point out, it is not always appropriate, and can be overused. But in the right place, it can be useful for making code more modular and decoupled.
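The logging example above can be sketched in a few lines. This is a minimal Python 3 illustration, not any particular library's implementation; the function names are made up:

```python
import functools

def logged(func):
    """Weave entry/exit logging around func without modifying it."""
    @functools.wraps(func)           # keep the wrapped function's metadata
    def wrapper(*args, **kwargs):
        print("entering %s" % func.__name__)
        result = func(*args, **kwargs)
        print("exiting %s" % func.__name__)
        return result
    return wrapper

@logged
def add(a, b):
    return a + b

assert add(2, 3) == 5            # prints "entering add" then "exiting add"
assert add.__name__ == "add"     # wraps() preserved the original name
```

The cross-cutting concern (logging) lives in one place, and applying it to any function is a one-line declaration above its definition.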

Are your examples real code, or just examples?
If they're real code, I think you overuse decorators, probably because of your background (i.e. you are used to other programming languages)
Stage 1: avoiding decorators
def run(rootnode, func):
    def _run(node):  # recursive internal function
        func(node)
        for x in node.children:
            _run(x)  # recurse
    _run(rootnode)  # initial run
This run function makes makeRunner obsolete. Your example becomes:
def pp(n): print "%s," % n.val
run(tree, pp)
However, this ignores completely generators, so…
Stage 2: using generators
class Node:
    def __init__(self, val, children):
        self.val = val
        self.children = children
    def __iter__(self):  # recursive
        yield self
        for child in self.children:
            for item in child:  # recurse
                yield item

def run(rootnode, func):
    for node in rootnode:
        func(node)
Your example remains
def pp(n): print "%s," % n.val
run(tree, pp)
Note that the special method __iter__ allows us to use the for node in rootnode: construct. If you don't like it, just rename the __iter__ method to e.g. walker, and change the run loop into: for node in rootnode.walker():
Obviously, the run function could be a method of class Node instead.
As you see, I suggest calling run(tree, func) directly instead of binding it to the name printTree, but if you want that name, you can use a decorator, or you can use functools.partial:
printTree = functools.partial(run, func=pp)
and from then on, you would just
printTree(tree)
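Put together in Python 3, the generator version plus functools.partial looks like this. This sketch collects the values into a list instead of printing them, so the traversal order is easy to check:

```python
import functools

class Node:
    def __init__(self, val, children):
        self.val = val
        self.children = children
    def __iter__(self):       # depth-first, recursive
        yield self
        for child in self.children:
            yield from child  # Python 3 spelling of the inner loop

def run(rootnode, func):
    for node in rootnode:
        func(node)

tree = Node(1, [Node(2, []), Node(3, [Node(4, []), Node(5, [])])])

seen = []
printTree = functools.partial(run, func=seen.append)
printTree(tree)
assert [n.val for n in seen] == [1, 2, 3, 4, 5]
```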

Decorators, in the general sense, are functions or classes that wrap around another object, that extend, or decorate the object. The decorator supports the same interface as the wrapped function or object, so the receiver doesn't even know the object has been decorated.
A closure is a function that captures variables from an enclosing scope, so it can refer to names that are neither local to it nor global.
So decorators typically use closures; they don't replace them.
def increment(x):
    return x + 1

def double_increment(func):
    def wrapper(x):
        print 'decorator executed'
        r = func(x)  # --> func is saved in __closure__
        y = r * 2
        return r, y
    return wrapper

@double_increment
def increment(x):
    return x + 1
>>> increment(2)
decorator executed
(3, 6)
>>> increment.__closure__
(<cell at 0x02C7DC50: function object at 0x02C85DB0>,)
>>> increment.__closure__[0].cell_contents
<function increment at 0x02C85DB0>
So the decorator saves the original function in a closure cell.
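One practical note: the wrapper shadows the original's name and docstring. functools.wraps copies that metadata onto the wrapper and (since Python 3.2) also stores the original on `__wrapped__`, which is a more convenient handle than digging through `__closure__`. A Python 3 sketch of the same example:

```python
import functools

def double_increment(func):
    @functools.wraps(func)   # copy __name__/__doc__, set __wrapped__
    def wrapper(x):
        r = func(x)
        return r, r * 2
    return wrapper

@double_increment
def increment(x):
    "Add one."
    return x + 1

assert increment(2) == (3, 6)
assert increment.__name__ == "increment"  # not "wrapper"
assert increment.__wrapped__(2) == 3      # original still reachable
```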

Following up Dutch Master's AOP reference, you'll find that using decorators becomes especially useful when you start adding parameters to modify the behaviour of the decorated function/method, and reading that above the function definition is so much easier.
In one project I recall, we needed to supervise tons of celery tasks and so we came up with the idea of using a decorator to plug-and-tweak as required, which was something like:
class tracked_with(object):
    """
    Method decorator used to track the results of celery tasks.
    """
    def __init__(self, model, unique=False, id_attr='results_id',
                 log_error=False, raise_error=False):
        self.model = model
        self.unique = unique
        self.id_attr = id_attr
        self.log_error = log_error
        self.raise_error = raise_error

    def __call__(self, fn):
        def wrapped(*args, **kwargs):
            # Unique passed by parameter has priority over the decorator default
            unique = kwargs.get('unique', None)
            if unique is not None:
                self.unique = unique
            if self.unique:
                caller = args[0]
                pending = self.model.objects.filter(
                    state=self.model.Running,
                    task_type=caller.__class__.__name__
                )
                if pending.exists():
                    raise AssertionError('Another {} task is already running'
                                         ''.format(caller.__class__.__name__))
            results_id = kwargs.get(self.id_attr)
            try:
                result = fn(*args, **kwargs)
            except Retry:
                # Retry must always be raised to retry a task
                raise
            except Exception as e:
                # Error: update stats, log/raise/return depending on values
                if results_id:
                    self.model.update_stats(results_id, error=e)
                if self.log_error:
                    logger.error(e)
                if self.raise_error:
                    raise
                else:
                    return e
            else:
                # No error: save results in refresh object and return
                if results_id:
                    self.model.update_stats(results_id, **result)
                return result
        return wrapped
Then we simply decorated the run method on the tasks with the params required for each case, like:
class SomeTask(Task):
    @tracked_with(RefreshResults, unique=True, log_error=False)
    def run(self, *args, **kwargs):
        ...
Then changing the behaviour of the task (or removing the tracking altogether) meant tweaking one param, or commenting out the decorated line. Super easy to implement, but more importantly, super easy to understand on inspection.
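The pattern itself is easy to see without the celery machinery. Here is a stripped-down, self-contained sketch of the same class-based parameterised decorator; the names (`tracked`, `errors`) are invented for illustration and are not from the project above:

```python
class tracked(object):
    """Parameterised method decorator: a minimal sketch of the pattern."""
    def __init__(self, log, raise_error=False):
        self.log = log                  # assumption: a list to record errors
        self.raise_error = raise_error

    def __call__(self, fn):
        def wrapped(*args, **kwargs):
            try:
                return fn(*args, **kwargs)
            except Exception as e:
                self.log.append(e)      # the cross-cutting concern
                if self.raise_error:
                    raise
                return e
        return wrapped

errors = []

class SomeTask:
    @tracked(errors, raise_error=False)
    def run(self):
        raise ValueError("boom")

result = SomeTask().run()
assert isinstance(result, ValueError)   # error returned, not raised
assert len(errors) == 1                 # and recorded in one place
```

Changing `raise_error=False` to `True` flips the behaviour without touching the task body, which is the point of the declarative style.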


Can a method reference itself anonymously?

I just wrote a small function that returns its own arguments as a dict:
from inspect import signature

class MyClass:
    def MyFunc(self, thing1=0, thing2=0, thing3=0, thing4="", thing5=""):
        P = {}
        for p in list(signature(self.MyFunc).parameters):
            P[p] = eval(p)
        return P
Setting aside why anyone would want to do that (and accepting that I've distilled a very simple example out of a broader context to explore a very specific question), there's an explicit reference self.MyFunc there.
I've seen complicated ways of avoiding that like:
globals()[inspect.getframeinfo(inspect.currentframe()).function]
and
globals()[sys._getframe().f_code.co_name]
but I wonder if there's something like the anonymous super() construct Python offers to reference the method of the same name in a parent class, that works for elegantly permitting a function to refer to itself, anonymously, i.e. without having to name itself.
I suspect not, and that there is no way to do this as of Python 3.8, but it seemed a worthwhile question to table and explore, and I invite correction of my suspicion.
No such construct exists. Code in a function has no special way to refer to that function.
Execution of a function doesn't actually involve the function itself, after initial startup. After startup, all that's needed from the function is the code object, and that's the only part the stack frame keeps a reference to. You can't recover the function from just the code object - many functions can share the same code object.
You can do it with a decorator that adds the parameter list to those passed to the method.
The same approach could be extended into a class decorator that did it to some or all of the methods of the class.
Here's an example implementation of the single-method decorator:
from inspect import signature
def add_paramlist(func):
    paramlist = list(signature(func).parameters)
    try:
        paramlist.remove('paramlist')
    except ValueError as exc:
        raise RuntimeError(f'"paramlist" argument not declared in signature of '
                           f'{func.__name__}() method') from exc

    def wrapped(*args, **kwargs):
        return func(paramlist, *args, **kwargs)
    return wrapped

class MyClass:
    @add_paramlist
    def MyFunc(paramlist, self, thing1=0, thing2=0, thing3=0, thing4="", thing5=""):
        P = {}
        for p in paramlist:
            P[p] = eval(p)
        return P
from pprint import pprint
inst = MyClass()
res = inst.MyFunc(thing1=2, thing2=2, thing3=2, thing4="2", thing5="2")
pprint(res)
Output:
{'self': <__main__.MyClass object at 0x00566B38>,
'thing1': 2,
'thing2': 2,
'thing3': 2,
'thing4': '2',
'thing5': '2'}
As user2357112 says, there's no hack-free way to get a reference to a function from within that function. But if you just want a function to return its arguments as a dict, you can use this:
class MyClass:
    def MyFunc(self, **kwargs):
        return kwargs
or if you want to use the *args:
class MyClass:
    def MyFunc(self, *args, **kwargs):
        names = ["thing%d" % i for i in range(1, 6)]
        for v, k in zip(args, names):
            if k in kwargs:
                raise ValueError
            else:
                kwargs[k] = v
        return kwargs
Using a hack including locals:
class MyClass:
    def MyFunc(self, thing1=0, thing2=0, thing3=0, thing4="", thing5=""):
        d = locals().copy()
        del d["self"]
        return d

Count calls of a method that may or may not be called inside a decorator

I have this class:
class SomeClass(object):
    def __init__(self):
        self.cache = {}

    def check_cache(method):
        def wrapper(self):
            if method.__name__ in self.cache:
                print('Got it from the cache!')
                return self.cache[method.__name__]
            print('Got it from the api!')
            self.cache[method.__name__] = method(self)
            return self.cache[method.__name__]
        return wrapper

    @check_cache
    def expensive_operation(self):
        return get_data_from_api()

def get_data_from_api():
    "This would call the api."
    return 'lots of data'
The idea is that I can use the @check_cache decorator to keep the expensive_operation method from calling an api additional times if the result is already cached.
This works fine, it seems.
>>> sc = SomeClass()
>>> sc.expensive_operation()
Got it from the api!
'lots of data'
>>> sc.expensive_operation()
Got it from the cache!
'lots of data'
But I would love to be able to test it with another decorator:
import unittest

class SomeClassTester(SomeClass):
    def counted(f):
        def wrapped(self, *args, **kwargs):
            wrapped.calls += 1
            return f(self, *args, **kwargs)
        wrapped.calls = 0
        return wrapped

    @counted
    def expensive_operation(self):
        return super().expensive_operation()

class TestSomeClass(unittest.TestCase):
    def test_api_is_only_called_once(self):
        sc = SomeClassTester()
        sc.expensive_operation()
        self.assertEqual(sc.expensive_operation.calls, 1)  # is 1
        sc.expensive_operation()
        self.assertEqual(sc.expensive_operation.calls, 1)  # but this goes to 2

unittest.main()
The problem is that the counted decorator counts the number of times the wrapper function is called, not the number of times the inner function calls the api.
How do I count that from SomeClassTester?
There's no easy way to do this. Your current test applies the decorators in the wrong order. You want check_cache(counted(expensive_operation)), but you're getting the counted decorator on the outside instead: counted(check_cache(expensive_operation)).
There's no easy way to fix this within the counted decorator, because by the time it gets called, the original function is already wrapped up by the check_cache decorator, and there's no easy way to change the wrapper (it holds its reference to the original function in a closure cell, which is read-only from the outside).
One possible way to make it work is to rebuild the whole method with the decorators in the desired order. You can get a reference to the original method from the closure cell:
class SomeClassTester(SomeClass):
    def counted(f):
        def wrapped(self, *args, **kwargs):
            wrapped.calls += 1
            return f(self, *args, **kwargs)
        wrapped.calls = 0
        return wrapped

    expensive_operation = SomeClass.check_cache(
        counted(SomeClass.expensive_operation.__closure__[0].cell_contents)
    )
This is of course far from ideal, since you need to know exactly which decorators are applied to the method in SomeClass in order to apply them again properly. You also need to know the internals of those decorators so that you can get the right closure cell (the [0] index may not be correct if the other decorator is changed).
Another (perhaps better) approach might be to change SomeClass in such a way that you can inject your counting code in between the changed method and the expensive bit you want to count. For example, you could have the real expensive part be in _expensive_method_implementation, while the decorated expensive_method is just a simple wrapper that calls it. The test class can override the _implementation method with its own decorated version (which might even skip the actually expensive part and just return dummy data). It doesn't need to override the regular method or mess with its decorators.
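That implementation-override approach might look like the following sketch. The split into a wrapper method and a `_expensive_impl` helper is an invented illustration of the idea, not code from the question:

```python
def check_cache(method):
    """Cache the method's result under its name, as in the question."""
    def wrapper(self):
        if method.__name__ not in self.cache:
            self.cache[method.__name__] = method(self)
        return self.cache[method.__name__]
    return wrapper

class SomeClass:
    def __init__(self):
        self.cache = {}

    @check_cache
    def expensive_operation(self):
        return self._expensive_impl()   # thin wrapper around the real work

    def _expensive_impl(self):
        return "lots of data"           # the genuinely expensive part

class SomeClassTester(SomeClass):
    def __init__(self):
        super().__init__()
        self.calls = 0

    def _expensive_impl(self):          # count here, below the cache layer
        self.calls += 1
        return super()._expensive_impl()

sc = SomeClassTester()
sc.expensive_operation()
assert sc.calls == 1
sc.expensive_operation()                # served from the cache
assert sc.calls == 1                    # implementation ran only once
```

The test subclass never touches the decorator; it only overrides the undecorated implementation hook.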
It is impossible to do this without modifying the base class to provide hooks, or without rebuilding the whole decorated function in the derived class based on internal knowledge of the base class. There is, however, a third way that relies on the internal workings of the cache decorator: change your cache dict so that it counts writes.
class CounterDict(dict):
    def __init__(self, *args):
        super().__init__(*args)
        self.count = {}

    def __setitem__(self, key, value):
        try:
            self.count[key] += 1
        except KeyError:
            self.count[key] = 1
        return super().__setitem__(key, value)

class SomeClassTester(SomeClass):
    def __init__(self):
        self.cache = CounterDict()
class TestSomeClass(unittest.TestCase):
    def test_api_is_only_called_once(self):
        sc = SomeClassTester()
        sc.expensive_operation()
        self.assertEqual(sc.cache.count['expensive_operation'], 1)  # is 1
        sc.expensive_operation()
        self.assertEqual(sc.cache.count['expensive_operation'], 1)  # still 1

Monkeypatch with instance method

I'm trying to monkeypatch pandas Panel's slicing (__getitem__). This is straightforward to do with a plain function:
from pandas import Panel

ORIGINAL_getitem = Panel.__getitem__

def newgetitem(panel, *args, **kwargs):
    """ Append a string to return of panel.__getitem__"""
    out = super(Panel, panel).__getitem__(*args, **kwargs)
    return out + 'custom stuff added'

Panel.__getitem__ = newgetitem
where ORIGINAL_getitem stores the original Panel method. I'm trying to extend this to the case where the patch is not a plain function, but an instance method of an object, Foo. For example:
class Foo:
    name = 'some name'

    def newgetitem(self, panel, *args, **kwargs):
        """ Append a string to return of panel.__getitem__,
        but take attributes from self, like self.name
        """
        out = super(Panel, panel).__getitem__(*args, **kwargs)
        return out + 'custom stuff added including name' + self.name
Foo.newgetitem() must access the attribute self.name. Therefore, the monkeypatched function needs a reference to the Foo instance somehow, in addition to the Panel. How can I monkeypatch Panel with Foo.newgetitem() and keep self.name accessible?
The switching between the monkey patched function happens in another method, Foo.set_backend()
class Foo:
    name = 'some name'

    def foo(self):
        return 'bar, called by %s' % self.name

    def set_backend(self, backend):
        """ Swap between new or original slicing."""
        if backend != 'pandas':
            Panel.__getitem__ = newgetitem
        else:
            Panel.__getitem__ = ORIGINAL_getitem
What I really need is for newgetitem to maintain a reference to self.
Solution Attempts
So far I've tried making newgetitem() a pure function and using functools.partial to pass a reference to self in. This doesn't work. Something like:
import functools

def newgetitem(foo_instance, panel, *args, **kwargs):
    ....

class Foo:
    ...
    def set_backend(self, backend):
        """ Swap between new or original slicing."""
        if backend != 'pandas':
            partialfcn = functools.partial(newgetitem, self)
            Panel.__getitem__ = partialfcn
        else:
            Panel.__getitem__ = ORIGINAL_getitem
But this doesn't work: a reference to self is passed, but the calling Panel instance is never passed in, because a partial object is not a descriptor and so never binds the instance. That is:
panel['50']
Passes a reference to Foo, not to Panel.
Yes, I know this is bad practice, but it's just a workaround for the time-being.
You can use patch from the mock framework to handle your case. Even though it is designed for testing, its primary job is monkey patching in a defined context.
Your set_backend() method could be:
def set_backend(self, backend):
    if backend != 'pandas' and self._patched_get_item is None:
        self._patched_get_item = patch("pandas.Panel.__getitem__", autospec=True,
                                       side_effect=self._getitem)
        self._patched_get_item.start()
    elif backend == 'pandas' and self._patched_get_item is not None:
        self._patched_get_item.stop()
        self._patched_get_item = None
That will work either when self._getitem is a method or a reference to a function.
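Without pandas installed, the same technique can be shown on a toy class. This is a sketch; `Panel`, `Foo`, and `_getitem` here are stand-ins, and the key pieces are `autospec=True` (so the patched `__getitem__` still receives the Panel instance as its first argument) and `side_effect` pointing at a bound method (so `self.name` stays accessible):

```python
from unittest import mock

class Panel:
    def __getitem__(self, key):
        return 'original %s' % key

class Foo:
    name = 'some name'

    def __init__(self):
        self._patcher = None

    def _getitem(self, panel, key):
        # panel is passed thanks to autospec=True; self is this Foo instance
        return 'custom %s by %s' % (key, self.name)

    def set_backend(self, backend):
        if backend != 'pandas' and self._patcher is None:
            self._patcher = mock.patch.object(
                Panel, '__getitem__', autospec=True, side_effect=self._getitem)
            self._patcher.start()
        elif backend == 'pandas' and self._patcher is not None:
            self._patcher.stop()
            self._patcher = None

p = Panel()
f = Foo()
f.set_backend('custom')
assert p['50'] == 'custom 50 by some name'
f.set_backend('pandas')
assert p['50'] == 'original 50'
```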
One way to do this is to create a closure (a function that refers to names which are neither local to it nor global). A simple closure:
def g(x):
    def f():
        """f has no global or local reference to x, but can refer to the locals of the
        context it was created in (also known as nonlocals)."""
        return x
    return f

func = g(1)
assert func() == 1
I don't have pandas on my system, but it works much the same with a dict.
class MyDict(dict):
    pass

d = MyDict(a=1, b=2)
assert d['a'] == 1

class Foo:
    name = 'name'
    def create_getitem(fooself, cls):
        def getitem(self, *args, **kwargs):
            out = super(cls, self).__getitem__(*args, **kwargs)
            return out, 'custom', fooself.name
        # Above references fooself, a name that is not defined locally in the
        # function, but as part of the scope the function was created in.
        return getitem

MyDict.__getitem__ = Foo().create_getitem(MyDict)
assert d['a'] == (1, 'custom', Foo.name)
print(d['a'])
The basics of monkey patching are straightforward but it can quickly become tricky and subtle, especially if you're aiming at finding a solution that would work for both Python 2 and Python 3.
Furthermore, quickly hacked solutions are usually not very readable or maintainable, unless you manage to wrap the monkey-patching logic nicely.
That's why I invite you to have a look at a library that I wrote especially for this purpose. It is named Gorilla and you can find it on GitHub.
In short, it provides a cool set of features, it has a wide range of unit tests, and it comes with a fancy doc that should cover everything you need to get started. Make sure to also check the FAQ!

Subclassing method decorators in python

I am having trouble thinking of a way, consistent with good Python and the OOP principles I've been taught, to create a family of related method decorators in Python.
The mutually inconsistent goals seem to be that I want to be able to access both decorator attributes AND attributes of the instance on which the decorated method is bound. Here's what I mean:
from functools import wraps

class AbstractDecorator(object):
    """
    This seems like the more natural way, but won't work
    because the instance to which the wrapped function
    is attached will never be in scope.
    """
    def __new__(cls, f, *args, **kwargs):
        return wraps(f)(object.__new__(cls, *args, **kwargs))

    def __init__(decorator_self, f):
        decorator_self.f = f
        decorator_self.punctuation = "..."

    def __call__(decorator_self, *args, **kwargs):
        decorator_self.very_important_prep()
        return decorator_self.f(decorator_self, *args, **kwargs)

class SillyDecorator(AbstractDecorator):
    def very_important_prep(decorator_self):
        print "My apartment was infested with koalas%s" % (decorator_self.punctuation)

class UsefulObject(object):
    def __init__(useful_object_self, noun):
        useful_object_self.noun = noun

    @SillyDecorator
    def red(useful_object_self):
        print "red %s" % (useful_object_self.noun)

if __name__ == "__main__":
    u = UsefulObject("balloons")
    u.red()
which of course produces
My apartment was infested with koalas...
AttributeError: 'SillyDecorator' object has no attribute 'noun'
Note that of course there is always a way to get this to work. A factory with enough arguments, for example, will let me attach methods to some created instance of SillyDecorator, but I was kind of wondering whether there is a reasonable way to do this with inheritance.
@miku got the key idea of using the descriptor protocol. Here is a refinement that keeps the decorator object separate from the "useful object": it doesn't store the decorator info on the underlying object.
import functools
from functools import wraps

class AbstractDecorator(object):
    def __new__(cls, f, *args, **kwargs):
        return wraps(f)(object.__new__(cls, *args, **kwargs))

    def __init__(decorator_self, f):
        decorator_self.f = f
        decorator_self.punctuation = "..."

    def __call__(decorator_self, obj_self, *args, **kwargs):
        decorator_self.very_important_prep()
        return decorator_self.f(obj_self, *args, **kwargs)

    def __get__(decorator_self, obj_self, objtype):
        return functools.partial(decorator_self.__call__, obj_self)

class SillyDecorator(AbstractDecorator):
    def very_important_prep(decorator_self):
        print "My apartment was infested with koalas%s" % (decorator_self.punctuation)

class UsefulObject(object):
    def __init__(useful_object_self, noun):
        useful_object_self.noun = noun

    @SillyDecorator
    def red(useful_object_self):
        print "red %s" % (useful_object_self.noun)
>>> u = UsefulObject("balloons")
... u.red()
My apartment was infested with koalas...
red balloons
The descriptor protocol is the key here, since it is the thing that gives you access to both the decorated method and the object on which it is bound. Inside __get__, you can extract the useful object identity (obj_self) and pass it on to the __call__ method.
Note that it's important to use functools.partial (or some such mechanism) rather than simply storing obj_self as an attribute of decorator_self. Since the decorated method is on the class, only one instance of SillyDecorator exists. You can't use this SillyDecorator instance to store useful-object-instance-specific information --- that would lead to strange errors if you created multiple UsefulObjects and accessed their decorated methods without immediately calling them.
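A Python 3 rendition of this descriptor-based pattern might look like the following sketch. The class and method names (`Prefixed`, `Thing`, `describe`) are invented for illustration:

```python
import functools

class Prefixed:
    """Method decorator that prefixes the wrapped method's return value."""
    def __init__(self, func):
        self.func = func
        self.prefix = ">> "           # decorator attribute
        functools.update_wrapper(self, func)

    def __call__(self, obj, *args, **kwargs):
        # obj is the instance the method is bound to; self is the decorator
        return self.prefix + self.func(obj, *args, **kwargs)

    def __get__(self, obj, objtype=None):
        if obj is None:               # accessed on the class, not an instance
            return self
        # bind the instance without storing it on the decorator
        return functools.partial(self.__call__, obj)

class Thing:
    def __init__(self, noun):
        self.noun = noun

    @Prefixed
    def describe(self):
        return "a %s" % self.noun     # instance attribute is in scope

t = Thing("balloon")
assert t.describe() == ">> a balloon"
```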
It's worth pointing out, though, that there may be an easier way. In your example, you're only storing a small amount of information in the decorator, and you don't need to change it later. If that's the case, it might be simpler to just use a decorator-maker function: a function that takes an argument (or arguments) and returns a decorator, whose behavior can then depend on those arguments. Here's an example:
def decoMaker(msg):
    def deco(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            print msg
            return func(*args, **kwargs)
        return wrapper
    return deco

class UsefulObject(object):
    def __init__(useful_object_self, noun):
        useful_object_self.noun = noun

    @decoMaker('koalas...')
    def red(useful_object_self):
        print "red %s" % (useful_object_self.noun)
>>> u = UsefulObject("balloons")
... u.red()
koalas...
red balloons
You can use the decoMaker ahead of time to make a decorator to reuse later, if you don't want to retype the message every time you make the decorator:
sillyDecorator = decoMaker("Some really long message about koalas that you don't want to type over and over")

class UsefulObject(object):
    def __init__(useful_object_self, noun):
        useful_object_self.noun = noun

    @sillyDecorator
    def red(useful_object_self):
        print "red %s" % (useful_object_self.noun)
>>> u = UsefulObject("balloons")
... u.red()
Some really long message about koalas that you don't want to type over and over
red balloons
You can see that this is much less verbose than writing a whole class-inheritance tree for different kinds of decorators. Unless you're writing super-complicated decorators that store all sorts of internal state (which is likely to get confusing anyway), this decorator-maker approach might be an easier way to go.
Adapted from http://metapython.blogspot.de/2010/11/python-instance-methods-how-are-they.html. Note that this variant sets attributes on the target instance; hence, without checks, it is possible to overwrite target-instance attributes. The code below does not contain any checks for this case.
Also note that this example sets the punctuation attribute explicitly; a more general class could auto-discover its attributes.
from types import MethodType

class AbstractDecorator(object):
    """Designed to work as function or method decorator """
    def __init__(self, function):
        self.func = function
        self.punctuation = '...'

    def __call__(self, *args, **kw):
        self.setup()
        return self.func(*args, **kw)

    def __get__(self, instance, owner):
        # TODO: protect against 'overwrites'
        setattr(instance, 'punctuation', self.punctuation)
        return MethodType(self, instance, owner)

class SillyDecorator(AbstractDecorator):
    def setup(self):
        print('[setup] silly init %s' % self.punctuation)

class UsefulObject(object):
    def __init__(self, noun='cat'):
        self.noun = noun

    @SillyDecorator
    def d(self):
        print('Hello %s %s' % (self.noun, self.punctuation))

obj = UsefulObject()
obj.d()
# [setup] silly init ...
# Hello cat ...
# [setup] silly init ...
# Hello cat ...

How is lazy evaluation implemented (in ORMs for example)

I'm curious to know how lazy evaluation is implemented at higher levels, i.e. in libraries, etc. For example, how does the Django ORM or ActiveRecord defer evaluation of a query until it is actually used?
Let's have a look at some methods for django's django.db.models.query.QuerySet class:
class QuerySet(object):
    """
    Represents a lazy database lookup for a set of objects.
    """
    def __init__(self, model=None, query=None, using=None):
        ...
        self._result_cache = None
        ...

    def __len__(self):
        if self._result_cache is None:
            ...
        elif self._iter:
            ...
        return len(self._result_cache)

    def __iter__(self):
        if self._result_cache is None:
            ...
        if self._iter:
            ...
        return iter(self._result_cache)

    def __nonzero__(self):
        if self._result_cache is not None:
            ...

    def __contains__(self, val):
        if self._result_cache is not None:
            ...
        else:
            ...
        ...

    def __getitem__(self, k):
        ...
        if self._result_cache is not None:
            ...
        ...
The pattern that these methods follow is that no queries are executed until some method that really needs to return some result is called. At that point, the result is stored in self._result_cache and any subsequent call to the same method returns the cached value.
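That pattern can be distilled into a few lines. This is a sketch, not Django's actual code; `LazySeq` and `produce` are invented names, with a callable standing in for the database query:

```python
class LazySeq:
    """Minimal sketch of the QuerySet-style lazy/caching pattern."""
    def __init__(self, produce):
        self._produce = produce        # the expensive computation (e.g. a query)
        self._result_cache = None

    def _fetch_all(self):
        if self._result_cache is None:
            self._result_cache = list(self._produce())
        return self._result_cache

    def __len__(self):
        return len(self._fetch_all())  # forces evaluation

    def __iter__(self):
        return iter(self._fetch_all())

calls = []
def produce():
    calls.append(1)                    # record each "query execution"
    return [1, 2, 3]

seq = LazySeq(produce)                 # nothing executed yet
assert calls == []
assert len(seq) == 3                   # first access triggers the computation
assert list(seq) == [1, 2, 3]          # served from the cache
assert len(calls) == 1                 # computation ran exactly once
```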
In Python, an object may "exist", but its intrinsic value will only be known to the outer world at the moment it is used with one of the operators. Since the operators are defined on the class by the double-underscore magic names, a class can write the appropriate code to execute the deferred computation when an operator is called.
That means that if the object's value is, for example, to be used like a string, any part of the program that uses the object will call, at some point, the __str__ coercion method.
For example, let's create an object that behaves like a string, but tells the current time. Strings can be concatenated to other strings (__add__), can have their length requested (__len__), and so on. If we want it to fit perfectly in the place of a string, we'd have to override all of str's methods. The idea is to retrieve the actual value only when one of the operators is called; otherwise, the object can freely be assigned to variables and passed around. It will only be evaluated when its value is needed.
Then, one can have some code like this:
import time

class timestr(object):
    def __init__(self):
        self.value = None

    def __str__(self):
        self._getvalue()
        return self.value

    def __len__(self):
        self._getvalue()
        return len(self.value)

    def __add__(self, other):
        self._getvalue()
        return self.value + other

    def _getvalue(self):
        timet = time.localtime()
        self.value = " %s:%s:%s " % (timet.tm_hour, timet.tm_min, timet.tm_sec)
And using it on the console, you may have:
>>> a = timestr()
>>> b = timestr()
>>> print b
17:16:22
>>> print a
17:16:25
If the value for which you want lazy evaluation is an attribute of your object (like Person.name), rather than what your object behaves like, it is even easier. Python allows object attributes to be of a special type, called a descriptor, which has a method that is called each time the attribute is accessed. Therefore, one just has to create a class with a proper method named __get__ to fetch the actual value; that method will be called only when the attribute is needed.
Python even has a utility for easy descriptor creation: the property built-in, which makes this even easier. You pass the method that generates the attribute's value as the first parameter to property.
So, having an Event class with a lazily (and live) evaluated time is just a matter of writing:
import time

class Event(object):
    @property
    def time(self):
        timet = time.localtime()
        return " %s:%s:%s " % (timet.tm_hour, timet.tm_min, timet.tm_sec)
And use it as in:
>>> e= Event()
>>> e.time
' 17:25:8 '
>>> e.time
' 17:25:10 '
The mechanism is quite simple:
class Lazy:
    def __init__(self, evaluate):
        self.evaluate = evaluate
        self.computed = False

    def getresult(self):
        if not self.computed:
            self.result = self.evaluate()
            self.computed = True
        return self.result
Then, this utility can be used as:
def some_computation(a, b):
    return ...

# bind the computation to its operands, but don't evaluate it yet
lazy = Lazy(lambda: some_computation(1, 2))

# "some_computation()" is evaluated now
print lazy.getresult()

# use the cached result again without re-computing
print lazy.getresult()
This implementation uses callables to represent the computation, but there are many variations on this theme (e.g. a base class that requires you to implement an evaluate() method, etc.).
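For the attribute case, the standard library now covers the cache-on-first-access pattern directly: `functools.cached_property` (Python 3.8+) behaves like `property` but stores the result on the instance after the first computation. A small sketch, using a counter instead of printing so the single evaluation is checkable:

```python
import functools

class Event:
    def __init__(self):
        self.computed = 0

    @functools.cached_property
    def data(self):
        self.computed += 1     # runs only on first access
        return [1, 2, 3]

e = Event()
assert e.data == [1, 2, 3]
assert e.data == [1, 2, 3]     # second access hits the cache
assert e.computed == 1
```

Unlike a plain `@property`, the method body here runs once per instance, not on every access.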
Not sure about the specifics of the library you're talking about, but from an algorithmic standpoint I've always used/understood it as follows (pseudo-code from a Python novice):
class Object:
    # ... other stuff ...
    _actual_property = None

    def interface(self):
        if self._actual_property is None:
            # Execute query and load up _actual_property
            ...
        return self._actual_property
Essentially because the interface and implementation are separated, you can define behaviors to execute upon request.
