Python memoising/deferred lookup property decorator

Recently I've gone through an existing code base containing many classes where instance attributes reflect values stored in a database. I've refactored a lot of these attributes to have their database lookups be deferred, ie. not be initialised in the constructor but only upon first read. These attributes do not change over the lifetime of the instance, but they're a real bottleneck to calculate that first time and only really accessed for special cases. Hence they can also be cached after they've been retrieved from the database (this therefore fits the definition of memoisation where the input is simply "no input").
I find myself typing the following snippet of code over and over again for various attributes across various classes:
class testA(object):
    def __init__(self):
        self._a = None
        self._b = None

    @property
    def a(self):
        if self._a is None:
            # Calculate the attribute now
            self._a = 7
        return self._a

    @property
    def b(self):
        pass  # etc.
Is there an existing decorator to do this already in Python that I'm simply unaware of? Or, is there a reasonably simple way to define a decorator that does this?
I'm working under Python 2.5, but 2.6 answers might still be interesting if they are significantly different.
Note
This question was asked before Python included a lot of ready-made decorators for this. I have updated it only to correct terminology.

Here is an example implementation of a lazy property decorator:
import functools

def lazyprop(fn):
    attr_name = '_lazy_' + fn.__name__

    @property
    @functools.wraps(fn)
    def _lazyprop(self):
        if not hasattr(self, attr_name):
            setattr(self, attr_name, fn(self))
        return getattr(self, attr_name)

    return _lazyprop

class Test(object):
    @lazyprop
    def a(self):
        print 'generating "a"'
        return range(5)
Interactive session:
>>> t = Test()
>>> t.__dict__
{}
>>> t.a
generating "a"
[0, 1, 2, 3, 4]
>>> t.__dict__
{'_lazy_a': [0, 1, 2, 3, 4]}
>>> t.a
[0, 1, 2, 3, 4]

I wrote this one for myself... To be used for true one-time calculated lazy properties. I like it because it avoids sticking extra attributes on objects, and once activated does not waste time checking for attribute presence, etc.:
import functools

class lazy_property(object):
    '''
    meant to be used for lazy evaluation of an object attribute.
    property should represent non-mutable data, as it replaces itself.
    '''
    def __init__(self, fget):
        self.fget = fget
        # copy the getter function's docstring and other attributes
        functools.update_wrapper(self, fget)

    def __get__(self, obj, cls):
        if obj is None:
            return self
        value = self.fget(obj)
        setattr(obj, self.fget.__name__, value)
        return value

class Test(object):
    @lazy_property
    def results(self):
        calcs = 1  # Do a lot of calculation here
        return calcs
Note: The lazy_property class is a non-data descriptor, which means it is read-only. Adding a __set__ method would prevent it from working correctly.
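To see why (a minimal illustration, not part of the original answer): a descriptor that defines __set__ becomes a data descriptor, and data descriptors take precedence over the instance __dict__, so the value cached via setattr() would never be found again:
class data_descriptor(object):
    def __get__(self, obj, cls):
        return "from descriptor"

    def __set__(self, obj, value):
        obj.__dict__['x'] = value

class C(object):
    x = data_descriptor()

c = C()
c.x = 42      # __set__ stores 42 in c.__dict__ ...
print(c.x)    # ... but prints "from descriptor": the data descriptor shadows the instance dict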

For all sorts of great utilities I'm using boltons.
As part of that library you have cachedproperty:
from boltons.cacheutils import cachedproperty

class Foo(object):
    def __init__(self):
        self.value = 4

    @cachedproperty
    def cached_prop(self):
        self.value += 1
        return self.value

f = Foo()
print(f.value)        # initial value
print(f.cached_prop)  # cached property is calculated
f.value = 1
print(f.cached_prop)  # same value for the cached property - it isn't calculated again
print(f.value)        # the backing value is different (it's essentially an unrelated value)
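For reference, the output you would expect, assuming boltons' cachedproperty computes the value once on first access and reuses the stored result afterwards:
4
5
5
1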

property is a class. A descriptor to be exact. Simply derive from it and implement the desired behavior.
class lazyproperty(property):
    ....

class testA(object):
    ....
    a = lazyproperty('_a')
    b = lazyproperty('_b')
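The answer leaves the bodies elided; one hedged way to fill them in (the second compute argument is my own addition, since the sketch alone doesn't say where the value comes from):
class lazyproperty(property):
    def __init__(self, backing_name, compute):
        self.backing_name = backing_name   # e.g. '_a'
        self.compute = compute             # callable doing the costly work
        super(lazyproperty, self).__init__(self._getter)

    def _getter(self, obj):
        value = getattr(obj, self.backing_name, None)
        if value is None:
            value = self.compute(obj)
            setattr(obj, self.backing_name, value)
        return value

class testA(object):
    a = lazyproperty('_a', lambda self: 7)  # 7 stands in for the costly lookup
    b = lazyproperty('_b', lambda self: 8)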

Here's a callable that takes an optional timeout argument; in __call__ you could also copy over __name__, __doc__ and __module__ from func's namespace:
import time

class Lazyproperty(object):
    def __init__(self, timeout=None):
        self.timeout = timeout
        self._cache = {}

    def __call__(self, func):
        self.func = func
        return self

    def __get__(self, obj, objcls):
        if (obj not in self._cache or
                (self.timeout and time.time() - self._cache[obj][1] > self.timeout)):
            self._cache[obj] = (self.func(obj), time.time())
        return self._cache[obj][0]
Example:
class Foo(object):
    @Lazyproperty(10)
    def bar(self):
        print('calculating')
        return 'bar'
>>> x = Foo()
>>> print(x.bar)
calculating
bar
>>> print(x.bar)
bar
...(waiting 10 seconds)...
>>> print(x.bar)
calculating
bar

What you really want is the reify (source linked!) decorator from Pyramid:
Use as a class method decorator. It operates almost exactly like the Python @property decorator, but it puts the result of the method it decorates into the instance dict after the first call, effectively replacing the function it decorates with an instance variable. It is, in Python parlance, a non-data descriptor. The following is an example and its usage:
>>> from pyramid.decorator import reify
>>> class Foo(object):
...     @reify
...     def jammy(self):
...         print('jammy called')
...         return 1
>>> f = Foo()
>>> v = f.jammy
jammy called
>>> print(v)
1
>>> f.jammy
1
>>> # jammy func not called the second time; it replaced itself with 1
>>> # Note: reassignment is possible
>>> f.jammy = 2
>>> f.jammy
2

They added exactly what you're looking for in Python 3.8: functools.cached_property.
Transform a method of a class into a property whose value is computed once and then cached as a normal attribute for the life of the instance.
Similar to property(), with the addition of caching.
Use it just like @property:
@cached_property
def a(self):
    self._a = 7
    return self._a

There is a mix-up of terms and/or confusion of concepts both in the question and in the answers so far.
Lazy evaluation just means that something is evaluated at runtime, at the last possible moment when a value is needed. The standard @property decorator does just that. (*) The decorated function is evaluated each and every time you need the value of that property. (see the Wikipedia article about lazy evaluation)
(*) Actually, true lazy evaluation (compare e.g. Haskell) is very hard to achieve in Python (and results in code which is far from idiomatic).
Memoization is the correct term for what the asker seems to be looking for. Pure functions that do not depend on side effects for return value evaluation can be safely memoized, and there is actually a decorator in functools, @functools.lru_cache, so there is no need to write your own decorators unless you need specialized behavior.
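As a minimal illustration of that point (the function name and values are invented for the example):
from functools import lru_cache

@lru_cache(maxsize=None)
def fetch_setting(key):
    print('computing %s' % key)  # runs only once per distinct key
    return key.upper()           # stands in for an expensive, pure computation

print(fetch_setting('host'))  # computes
print(fetch_setting('host'))  # served from the cache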

You can do this nicely and easily by deriving a class from Python's native property:
class cached_property(property):
    def __init__(self, func, name=None, doc=None):
        self.__name__ = name or func.__name__
        self.__module__ = func.__module__
        self.__doc__ = doc or func.__doc__
        self.func = func

    def __set__(self, obj, value):
        obj.__dict__[self.__name__] = value

    def __get__(self, obj, type=None):
        if obj is None:
            return self
        value = obj.__dict__.get(self.__name__, None)
        if value is None:
            value = self.func(obj)
            obj.__dict__[self.__name__] = value
        return value
We can use this property class like a regular class property (it also supports assignment, as you can see):
class SampleClass():
    @cached_property
    def cached_property(self):
        print('I am calculating value')
        return 'My calculated value'

c = SampleClass()
print(c.cached_property)
print(c.cached_property)
c.cached_property = 2
print(c.cached_property)
print(c.cached_property)
The value is only calculated the first time; after that, the saved value is used.
Output:
I am calculating value
My calculated value
My calculated value
2
2

I agree with @jason.
When I think about lazy evaluation, Asyncio immediately comes to mind.
The possibility of delaying the expensive calculation until the last minute is the sole benefit of lazy evaluation.
Caching / memoization, on the other hand, can be beneficial, but at the expense that the calculation is static and won't change with time / inputs.
A practice I often follow for expensive calculations of this sort is to calculate and then cache with a TTL, as sketched below.
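A minimal sketch of that calculate-then-cache-with-TTL idea (the class name and the default 60-second TTL are illustrative, not from the answer):
import time

class TTLCachedValue(object):
    def __init__(self, compute, ttl=60):
        self.compute = compute   # callable performing the expensive work
        self.ttl = ttl           # seconds the cached result stays valid
        self._value = None
        self._stamp = 0.0

    def get(self):
        now = time.time()
        if self._value is None or now - self._stamp > self.ttl:
            self._value = self.compute()
            self._stamp = now
        return self._value

# usage (hypothetical): price = TTLCachedValue(fetch_price_from_db, ttl=30)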

Related

Lazy class attribute initialization

I have a class that takes only one input argument. This value is then used to compute a number of attributes (only one in the following example). What is a Pythonic way to have the computation take place only when the attribute is accessed? In addition, the result should be cached, and attr2 must not be set from outside the class.
class LazyInit:
    def __init__(self, val):
        self.attr1 = val
        self.attr2 = self.compute_attr2()

    def compute_attr2(self):
        return self.attr1 * 2  # potentially costly computation

if __name__ == "__main__":
    obj = LazyInit(10)
    # actual computation should take place when calling the attribute
    print(obj.attr2)
Make attr2 a property, not an instance attribute.
class LazyInit:
    def __init__(self, val):
        self.attr1 = val
        self._attr2 = None

    @property
    def attr2(self):
        if self._attr2 is None:
            self._attr2 = self.compute_attr2()
        return self._attr2
_attr2 is a private instance attribute that both indicates whether the value has been computed yet, and saves the computed value for future access.
As hinted above, just use the @cached_property decorator.
from functools import cached_property

class LazyInit():
    ...

    @cached_property
    def attr2(self):
        return <perform expensive computation>
Olvin Roght correctly points out that this solution doesn't make attr2 read-only the way that @property does. If that is important to you, another possibility would be to write:
    ...

    @property
    def attr2(self):
        return self.__internal_attr2()

    @functools.cache
    def __internal_attr2(self):
        return <perform expensive calculation>
In any case, Python provides libraries to help you ensure that a value is only calculated once. It is better to use them than to try and write your own.

Setting default values in a class

I am creating a class in Python, and I am unsure how to properly set default values. My goal is to set default values for all class instances, which can also be modified by a class method. However, I would like to have the initial default values restored after calling a method.
I have been able to make it work with the code shown below. It isn't very "pretty", so I suspect there are better approaches to this problem.
class plots:
    def __init__(self, **kwargs):
        self.default_attr = {'a': 1, 'b': 2, 'c': 3}
        self.default_attr.update(kwargs)
        self.__dict__.update((k, v) for k, v in self.default_attr.items())

    def method1(self, **kwargs):
        self.__dict__.update((k, v) for k, v in kwargs.items())

        #### Code for this method goes here

        # Then restore initial default values
        self.__dict__.update((k, v) for k, v in self.default_attr.items())
When I use this class, I would do something like my_instance = plots() and my_instance.method1(), my_instance.method1(b = 5), and my_instance.method1(). When calling method1 the third time, b would be 5 if I don't reset the default values at the end of the method definition, but I would like it to be 2 again.
Note: the code above is just an example. The real class has dozens of default values, and using all of them as input arguments would be considered an antipattern.
Any suggestion on how to properly address this issue?
You can use class variables and property to achieve your goal of setting default values for all class instances. The instance values can be modified directly, and the initial default values restored after calling a method.
In view of the context that "the real class has dozens of default values", another approach you may consider is to set up a configuration file containing the default values, and use this file to initialize or reset the defaults (a sketch of that idea follows the example below).
Here is a short example of the first approach using one class variable:
class Plots:
    _a = 1

    def __init__(self):
        self._a = None
        self.reset_default_values()

    def reset_default_values(self):
        self._a = Plots._a

    @property
    def a(self):
        return self._a

    @a.setter
    def a(self, value):
        self._a = value

plot = Plots()
print(plot.a)
plot.a = 42
print(plot.a)
plot.reset_default_values()
print(plot.a)
output:
1
42
1
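A minimal sketch of the configuration-file approach mentioned above (the defaults.json file name and its keys are illustrative, not from the answer):
import json

class Plots:
    _defaults_path = 'defaults.json'   # hypothetical file containing e.g. {"a": 1, "b": 2, "c": 3}

    def __init__(self, **kwargs):
        self.reset_default_values()
        self.__dict__.update(kwargs)

    def reset_default_values(self):
        with open(self._defaults_path) as fh:
            self.__dict__.update(json.load(fh))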
There is a whole bunch of ways to solve this problem, but if you have python 3.7 installed (or have 3.6 and install the backport), dataclasses might be a good fit for a nice solution.
First of all, it lets you define the default values in a readable and compact manner, and also allows all the mutation operations you need:
>>> from dataclasses import dataclass
>>> @dataclass
... class Plots:
...     a: int = 1
...     b: int = 2
...     c: int = 3
...
>>> p = Plots()  # create a Plot with only default values
>>> p
Plots(a=1, b=2, c=3)
>>> p.a = -1  # update something in this Plot instance
>>> p
Plots(a=-1, b=2, c=3)
You also get the option to define default factories instead of default values for free with the dataclass field definition. It might not be a problem yet, but it avoids the mutable default value gotcha, which every Python programmer runs into sooner or later.
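A hedged illustration of such a default factory (the history field is invented for the example):
from dataclasses import dataclass, field
from typing import List

@dataclass
class Plots:
    a: int = 1
    # default_factory gives each instance its own fresh list,
    # avoiding the shared-mutable-default gotcha
    history: List[int] = field(default_factory=list)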
Last but not least, writing a reset function is quite easy given an existing dataclass, because it keeps track of all the default values already in its __dataclass_fields__ attribute:
>>> from dataclasses import dataclass, MISSING
>>> @dataclass
... class Plots:
...     a: int = 1
...     b: int = 2
...     c: int = 3
...
...     def reset(self):
...         for name, field in self.__dataclass_fields__.items():
...             if field.default != MISSING:
...                 setattr(self, name, field.default)
...             else:
...                 setattr(self, name, field.default_factory())
...
>>> p = Plots(a=-1) # create a Plot with some non-default values
>>> p
Plots(a=-1, b=2, c=3)
>>> p.reset() # calling reset on it restores the pre-defined defaults
>>> p
Plots(a=1, b=2, c=3)
So now you can write some function do_stuff(...) that updates the fields in a Plot instance, and as long as you execute reset() the changes won't persist.
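For instance, a hedged sketch of such a do_stuff function, using the Plots dataclass with reset() defined above (its body is purely illustrative):
def do_stuff(plots):
    plots.a = 99            # temporary, experiment-specific values
    plots.b = plots.a + 1
    print(plots)            # Plots(a=99, b=100, c=3)

p = Plots()
do_stuff(p)
p.reset()                   # restores the pre-defined defaults
print(p)                    # Plots(a=1, b=2, c=3)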
You can use a context manager or a decorator to apply and reset the values without having to type the same code on each method (a context-manager variant is sketched at the end of this answer).
Rather than having self.default_attr, I'd just return to the previous state.
Using a decorator you could get:
def with_kwargs(fn):
    def inner(self, **kwargs):
        prev = self.__dict__.copy()
        try:
            self.__dict__.update(kwargs)
            ret = fn(self)
        finally:
            self.__dict__ = prev
        return ret
    return inner
class plots:
    a = 1
    b = 2
    c = 3

    def __init__(self, **kwargs):
        self.__dict__.update(kwargs)

    @with_kwargs
    def method1(self):
        pass  # Code goes here
IMHO this is a bad idea, and I would at least suggest not mutating plots. You can do this by making a new object and passing that to method1 as self.
class Transparent:
    pass

def with_kwargs(fn):
    def inner(self, **kwargs):
        new_self = Transparent()
        new_self.__dict__ = {**self.__dict__, **kwargs}
        return fn(new_self)
    return inner
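A hedged sketch of the context-manager variant mentioned above (kwargs are applied inside the with block and the previous state is restored on exit; the name with_kwargs_ctx is my own):
from contextlib import contextmanager

@contextmanager
def with_kwargs_ctx(obj, **kwargs):
    prev = obj.__dict__.copy()
    try:
        obj.__dict__.update(kwargs)
        yield obj
    finally:
        obj.__dict__ = prev

# usage:
# with with_kwargs_ctx(my_instance, b=5):
#     my_instance.method1()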

Expectation on property getter/setter in python with Doubles

I'm using doubles together with pytest to unit test some Python 2.7 code.
Let's say I have a class such as:
class Foo(object):
    def bar(self, value):
        pass
And that I have some code where an instance of Foo is injected and that invokes its method:
def do_bar(foo):
    foo.bar(22)
I know how to create an expectation:
def test_doing_bar_sets_bar_to_22():
    foo = Foo()
    expect(foo).bar(22)
    do_bar(foo)
But I wonder how I can create an expectation on a setter of a property defined with the @property decorator:
class Foo(object):
    def bar(self):
        pass

    @property
    def bee(self):
        pass

    @bee.setter
    def bee(self, value):
        pass

def do_bee(foo):
    foo.bee = 22
To make this work:
def test_doing_bee_sets_bee_to_22():
    foo = Foo()
    # Something like expect(foo.bee).setter(22)
    # I tried: expect(foo.__class__.bee).setter(foo, 22)
    do_bee(foo)
I know I could simply create a class with a bee property that captures the value and pass that instance to the do_bee method, but if it is possible to resolve it through the expect approach, I prefer it for symmetry with the rest of the code and expressiveness.
Is it possible to do this with doubles?
UPDATE 2019-01-24
I did some research in the source code of doubles:
https://github.com/uber/doubles/blob/master/doubles/proxy_property.py
https://github.com/uber/doubles/blob/master/doubles/proxy_method.py
I believe the ProxyProperty implementation will not allow creating allowances/expectations when the property has a setter. In particular, I believe __set__ should be implemented, but I don't totally understand how property allowances/expectations are supposed to work (from a design perspective).
I could get it to work by changing the ProxyProperty implementation to:
class ProxyProperty(object):
    def __init__(self, name, original):
        self._name = name
        self._original = original

    def __get__(self, obj, objtype=None):
        return obj.__dict__.get(self._name, self._original.__get__(obj, objtype))

    def __set__(self, obj, value):
        obj.__dict__[self._name] = value
(Also, in proxy_method.ProxyMethod._hijack_target I removed the self._target.obj.__dict__[double_name(self._method_name)] = self line, since it no longer makes sense to me after my changes.)
With those changes I can write:
def test_doing_bee_sets_bee_to_22():
    foo = Foo()
    allow(foo).bee
    do_bee(foo)
    assert foo.bee == 22
However, I'd prefer to have a mechanism such as with_args(), because that would allow creating multiple expectations if, for example, do_bee set foo.bee multiple times.
UPDATE 2019-02-23
I opened an issue on their GitHub page and they recently marked it as a bug.

Same name for classmethod and instancemethod

I'd like to do something like this:
class X:
    @classmethod
    def id(cls):
        return cls.__name__

    def id(self):
        return self.__class__.__name__
And now call id() for either the class or an instance of it:
>>> X.id()
'X'
>>> X().id()
'X'
Obviously, this exact code doesn't work, but is there a similar way to make it work?
Or any other workarounds to get such behavior without too much "hacky" stuff?
Class and instance methods live in the same namespace and you cannot reuse names like that; the last definition of id will win in that case.
The class method will continue to work on instances, however, so there is no need to create a separate instance method; just use:
class X:
    @classmethod
    def id(cls):
        return cls.__name__
because the method continues to be bound to the class:
>>> class X:
...     @classmethod
...     def id(cls):
...         return cls.__name__
...
>>> X.id()
'X'
>>> X().id()
'X'
This is explicitly documented:
It can be called either on the class (such as C.f()) or on an instance (such as C().f()). The instance is ignored except for its class.
If you do need to distinguish between binding to the class and to an instance
If you need a method to work differently based on what it is bound to (bound to the class when accessed on the class, bound to the instance when accessed on the instance), you'll need to create a custom descriptor object.
The descriptor API is how Python causes functions to be bound as methods, and binds classmethod objects to the class; see the descriptor howto.
You can provide your own descriptor for methods by creating an object that has a __get__ method. Here is a simple one that switches what the method is bound to based on context: if the first argument to __get__ is None, then the descriptor is being bound to a class, otherwise it is being bound to an instance:
class class_or_instancemethod(classmethod):
    def __get__(self, instance, type_):
        descr_get = super().__get__ if instance is None else self.__func__.__get__
        return descr_get(instance, type_)
This re-uses classmethod and only re-defines how it handles binding, delegating to the original implementation when instance is None, and to the standard function __get__ implementation otherwise.
Note that in the method itself, you may then have to test what it is bound to; isinstance(firstargument, type) is a good test for this:
>>> class X:
...     @class_or_instancemethod
...     def foo(self_or_cls):
...         if isinstance(self_or_cls, type):
...             return f"bound to the class, {self_or_cls}"
...         else:
...             return f"bound to the instance, {self_or_cls}"
...
>>> X.foo()
"bound to the class, <class '__main__.X'>"
>>> X().foo()
'bound to the instance, <__main__.X object at 0x10ac7d580>'
An alternative implementation could use two functions, one for when bound to a class, the other when bound to an instance:
class hybridmethod:
    def __init__(self, fclass, finstance=None, doc=None):
        self.fclass = fclass
        self.finstance = finstance
        self.__doc__ = doc or fclass.__doc__
        # support use on abstract base classes
        self.__isabstractmethod__ = bool(
            getattr(fclass, '__isabstractmethod__', False)
        )

    def classmethod(self, fclass):
        return type(self)(fclass, self.finstance, None)

    def instancemethod(self, finstance):
        return type(self)(self.fclass, finstance, self.__doc__)

    def __get__(self, instance, cls):
        if instance is None or self.finstance is None:
            # either bound to the class, or no instance method available
            return self.fclass.__get__(cls, None)
        return self.finstance.__get__(instance, cls)
This then is a classmethod with an optional instance method. Use it like you'd use a property object; decorate the instance method with @<name>.instancemethod:
>>> class X:
...     @hybridmethod
...     def bar(cls):
...         return f"bound to the class, {cls}"
...     @bar.instancemethod
...     def bar(self):
...         return f"bound to the instance, {self}"
...
>>> X.bar()
"bound to the class, <class '__main__.X'>"
>>> X().bar()
'bound to the instance, <__main__.X object at 0x10a010f70>'
Personally, my advice is to be cautious about using this; the exact same method altering behaviour based on the context can be confusing to use. However, there are use-cases for this, such as SQLAlchemy's differentiation between SQL objects and SQL values, where column objects in a model switch behaviour like this; see their Hybrid Attributes documentation. The implementation for this follows the exact same pattern as my hybridmethod class above.
I have no idea what your actual use case is, but you can do something like this using a descriptor:
class Desc(object):
    def __get__(self, ins, typ):
        if ins is None:
            print 'Called by a class.'
            return lambda: typ.__name__
        else:
            print 'Called by an instance.'
            return lambda: ins.__class__.__name__

class X(object):
    id = Desc()

x = X()
print x.id()
print X.id()
Output
Called by an instance.
X
Called by a class.
X
It can be done, quite succinctly, by binding the instance-bound version of your method explicitly to the instance (rather than to the class). Python will invoke the instance attribute found in Class().__dict__ when Class().foo() is called (because it searches the instance's __dict__ before the class'), and the class-bound method found in Class.__dict__ when Class.foo() is called.
This has a number of potential use cases, though whether they are anti-patterns is open for debate:
class Test:
    def __init__(self):
        self.check = self.__check

    @staticmethod
    def check():
        print('Called as class')

    def __check(self):
        print('Called as instance, probably')
>>> Test.check()
Called as class
>>> Test().check()
Called as instance, probably
Or... let's say we want to be able to abuse stuff like map():
class Str(str):
    def __init__(self, *args):
        self.split = self.__split

    @staticmethod
    def split(sep=None, maxsplit=-1):
        return lambda string: string.split(sep, maxsplit)

    def __split(self, sep=None, maxsplit=-1):
        return super().split(sep, maxsplit)
>>> s = Str('w-o-w')
>>> s.split('-')
['w', 'o', 'w']
>>> Str.split('-')(s)
['w', 'o', 'w']
>>> list(map(Str.split('-'), [s]*3))
[['w', 'o', 'w'], ['w', 'o', 'w'], ['w', 'o', 'w']]
"types" provides something quite interesting since Python 3.4: DynamicClassAttribute
It is not doing 100% of what you had in mind, but it seems to be closely related, and you might need to tweak a bit my metaclass but, rougly, you can have this;
from types import DynamicClassAttribute

class XMeta(type):
    def __getattr__(self, value):
        if value == 'id':
            return XMeta.id  # You may want to change a bit that line.

    @property
    def id(self):
        return "Class {}".format(self.__name__)
That would define your class attribute. For the instance attribute:
class X(metaclass=XMeta):
    @DynamicClassAttribute
    def id(self):
        return "Instance {}".format(self.__class__.__name__)
It might be a bit overkill especially if you want to stay away from metaclasses. It's a trick I'd like to explore on my side, so I just wanted to share this hidden jewel, in case you can polish it and make it shine!
>>> X().id
'Instance X'
>>> X.id
'Class X'
Voila...
In your example, you could simply delete the second method entirely, since both versions of id() do the same thing.
If you wanted them to do different things:
class X:
    def id(self=None):
        if self is None:
            # It's being called as a static method
            ...
        else:
            # It's being called as an instance method
            ...
(Python 3 only) Elaborating on the idea of a pure-Python implementation of @classmethod, we can declare @class_or_instance_method as a decorator, which is actually a class implementing the attribute descriptor protocol:
import inspect

class class_or_instance_method(object):
    def __init__(self, f):
        self.f = f

    def __get__(self, instance, owner):
        if instance is not None:
            class_or_instance = instance
        else:
            class_or_instance = owner

        def newfunc(*args, **kwargs):
            return self.f(class_or_instance, *args, **kwargs)
        return newfunc

class A:
    @class_or_instance_method
    def foo(self_or_cls, a, b, c=None):
        if inspect.isclass(self_or_cls):
            print("Called as a class method")
        else:
            print("Called as an instance method")

Python class member lazy initialization

I would like to know the Pythonic way of initializing a class member only when it is accessed, if it is accessed at all.
I tried the code below and it works, but is there something simpler?
class MyClass(object):
    _MY_DATA = None

    @staticmethod
    def _retrieve_my_data():
        my_data = ...  # costly database call
        return my_data

    @classmethod
    def get_my_data(cls):
        if cls._MY_DATA is None:
            cls._MY_DATA = MyClass._retrieve_my_data()
        return cls._MY_DATA
You could use a @property on the metaclass instead:
class MyMetaClass(type):
    @property
    def my_data(cls):
        if getattr(cls, '_MY_DATA', None) is None:
            my_data = ...  # costly database call
            cls._MY_DATA = my_data
        return cls._MY_DATA

class MyClass(metaclass=MyMetaClass):
    ...
This makes my_data an attribute on the class, so the expensive database call is postponed until you try to access MyClass.my_data. The result of the database call is cached by storing it in MyClass._MY_DATA, the call is only made once for the class.
For Python 2, use class MyClass(object): and add a __metaclass__ = MyMetaClass attribute in the class definition body to attach the metaclass.
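A minimal sketch of that Python 2 spelling, reusing the same MyMetaClass:
# Python 2: the metaclass is attached via a __metaclass__ class attribute
class MyClass(object):
    __metaclass__ = MyMetaClass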
Demo:
>>> class MyMetaClass(type):
...     @property
...     def my_data(cls):
...         if getattr(cls, '_MY_DATA', None) is None:
...             print("costly database call executing")
...             my_data = 'bar'
...             cls._MY_DATA = my_data
...         return cls._MY_DATA
...
>>> class MyClass(metaclass=MyMetaClass):
...     pass
...
>>> MyClass.my_data
costly database call executing
'bar'
>>> MyClass.my_data
'bar'
This works because a data descriptor like property is looked up on the parent type of an object; for classes that's type, and type can be extended by using metaclasses.
This answer is for a typical instance attribute/method only, not for a class attribute/classmethod, or staticmethod.
For Python 3.8+, how about using the cached_property decorator? It memoizes.
from functools import cached_property

class MyClass:
    @cached_property
    def my_lazy_attr(self):
        print("Initializing and caching attribute, once per class instance.")
        return 7**7**8
For Python 3.2+, how about using both property and lru_cache decorators? The latter memoizes.
from functools import lru_cache

class MyClass:
    @property
    @lru_cache()
    def my_lazy_attr(self):
        print("Initializing and caching attribute, once per class instance.")
        return 7**7**8
Credit: answer by Maxime R.
Another approach to make the code cleaner is to write a wrapper function that does the desired logic:
def memoize(f):
    def wrapped(*args, **kwargs):
        if hasattr(wrapped, '_cached_val'):
            return wrapped._cached_val
        result = f(*args, **kwargs)
        wrapped._cached_val = result
        return result
    return wrapped
You can use it as follows:
@memoize
def expensive_function():
    print "Computing expensive function..."
    import time
    time.sleep(1)
    return 400

print expensive_function()
print expensive_function()
print expensive_function()
Which outputs:
Computing expensive function...
400
400
400
Now your classmethod would look as follows, for example:
class MyClass(object):
    @classmethod
    @memoize
    def retrieve_data(cls):
        print "Computing data"
        import time
        time.sleep(1)  # costly DB call
        my_data = 40
        return my_data

print MyClass.retrieve_data()
print MyClass.retrieve_data()
print MyClass.retrieve_data()
Output:
Computing data
40
40
40
Note that this will cache just one value for any set of arguments to the function, so if you want to compute different values depending on input values, you'll have to make memoize a bit more complicated.
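A hedged sketch of that more complicated, argument-aware variant (the dict-based cache keyed on the call arguments is my own addition, and it assumes the arguments are hashable):
def memoize_by_args(f):
    cache = {}
    def wrapped(*args, **kwargs):
        key = (args, tuple(sorted(kwargs.items())))
        if key not in cache:
            cache[key] = f(*args, **kwargs)
        return cache[key]
    return wrapped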
Consider the pip-installable Dickens package, which is available for Python 3.5+. It has a descriptors package which provides the relevant cachedproperty and cachedclassproperty decorators, whose usage is shown in the example below. It seems to work as expected.
from descriptors import cachedproperty, classproperty, cachedclassproperty

class MyClass:
    FOO = 'A'

    def __init__(self):
        self.bar = 'B'

    @cachedproperty
    def my_cached_instance_attr(self):
        print('Initializing and caching attribute, once per class instance.')
        return self.bar * 2

    @cachedclassproperty
    def my_cached_class_attr(cls):
        print('Initializing and caching attribute, once per class.')
        return cls.FOO * 3

    @classproperty
    def my_class_property(cls):
        print('Calculating attribute without caching.')
        return cls.FOO + 'C'
Ring gives an lru_cache-like interface, but works with any kind of descriptor: https://ring-cache.readthedocs.io/en/latest/quickstart.html#method-classmethod-staticmethod
class Page(object):
    (...)

    @ring.lru()
    @classmethod
    def class_content(cls):
        return cls.base_content

    @ring.lru()
    @staticmethod
    def example_dot_com():
        return requests.get('http://example.com').content
See the link for more details.
