So I was looking at a class that has the following property. However, the property method itself doesn't contain the actual logic; instead it calls another method to do the work, like this:
class Foo:
    @property
    def params(self):
        return self._params()

    @property
    def target(self):
        return self._target()

    def _params(self):
        return print("hello")

    def _target(self):
        return print("world")
What I am trying to understand is whether this is some sort of pattern. I have seen a similar thing in another class as well, where the method with the property decorator simply calls another method of the same name prefixed with an underscore.
Note: I do know what the property decorator is, but I don't understand what this specific underscore convention aims to achieve.
Effectively, the property is being used as a shortcut for calling a method with a fixed set of arguments. As a slightly different example, consider
class Foo:
    @property
    def params(self):
        return self._params(1, "foo", True)

    def _params(self, x, y, z):
        ...

f = Foo()
Now, f.params is a shortcut for f._params(1, "foo", True). Whether that is worth doing depends on whether _params is used for anything other than implementing the body of the params getter. If it isn't, there's little real point in writing code like this.
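To make the equivalence concrete, here is a minimal runnable sketch (the body of _params is assumed purely for illustration):

class Foo:
    @property
    def params(self):
        return self._params(1, "foo", True)

    def _params(self, x, y, z):
        # Trivial body, assumed for the demonstration.
        return (x, y, z)

f = Foo()
# Accessing the property is exactly the fixed-argument call:
assert f.params == f._params(1, "foo", True)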
I have subclassed the built-in property class (call it SpecialProperty) in order to add more fields to it:
class SpecialProperty(property):
    extra_field_1 = None
    extra_field_2 = None

    def __init__(self, fget=None, fset=None, fdel=None, doc=None):
        super().__init__(fget, fset, fdel, doc)

def make_special_property(func):
    prop = SpecialProperty(fget=func)
    return prop
and I am able to use it in the same fashion as the built-in property() decorator:
@my_module.make_special_property
def get_my_property(self):
    return self._my_property
I now want to further specialise my SpecialProperty instances by populating one of the extra fields I have added to the class with an arbitrary value.
Is it possible, in Python, to write a decorator that returns a property while also accepting extra parameters?
I'd like to do it via the decorator because this is where and when the information is most relevant; however, I'm finding myself stuck. I suspect this falls under the domain of decorators with arguments, which has been well documented (Decorators with arguments? (Stack Overflow) or Python Decorators II: Decorator Arguments (artima.com), to cite only a couple of sources), but I find myself unable to apply the same pattern to my case.
Here's how I'm trying to write it:
@my_module.make_special_property("example string")
def get_my_property(self):
    return self._my_property
And on the class declaring get_my_property:
>>> DeclaringClass.my_property
<SpecialProperty object at 0x...>
>>> DeclaringClass.my_property.extra_field_1
'example string'
Since I am making properties, the decorated class member should be swapped with an instance of SpecialProperty, and hence should not be a callable anymore -- thus, I am unable to apply the "nested wrapper" pattern for allowing a decorator with arguments.
Non-working example:
def make_special_property(custom_arg_1):
    def wrapper(func):
        prop = SpecialProperty(fget=func)
        prop.extra_field_1 = custom_arg_1
        return prop
    return wrapper  # this returns a callable (function)
I shouldn't have a callable returned here; if I want a property, I should have a SpecialProperty instance returned. But I can't call return wrapper(func), for obvious reasons.
Your decorator doesn't return a callable. Your decorator factory returns a decorator, which returns a property. You might understand better if you rename the functions:
def make_decorator(custom_arg_1):
    def decorator(func):
        prop = SpecialProperty(fget=func)
        prop.extra_field_1 = custom_arg_1
        return prop
    return decorator
When you decorate with make_decorator, it is called with an argument, and decorator is returned and called on the decorated function.
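A quick usage sketch, reusing the SpecialProperty class from the question (the attribute name my_property is illustrative):

class DeclaringClass:
    @make_decorator("example string")
    def my_property(self):
        return self._my_property

print(DeclaringClass.my_property)                # <SpecialProperty object at 0x...>
print(DeclaringClass.my_property.extra_field_1)  # example string

Accessing the property on the class (rather than on an instance) returns the SpecialProperty object itself, so the extra field is reachable exactly as the question wanted.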
I'm trying to write a class method decorator that modifies its class's state. I'm having trouble implementing it at the moment.
Side question: When does a decorator get called? Does it run when the class is instantiated, or when the class definition is read?
What I'm trying to do is this:
class ObjMeta(object):
    methods = []

    # This should be a decorator that magically updates the 'methods'
    # attribute (or list) of this class that's being read by the proxying
    # class below.
    def method_wrapper(method):
        @functools.wraps(method)
        def wrapper(*args, **kwargs):
            ObjMeta.methods.append(method.__name__)
            return method(*args, **kwargs)
        return wrapper

    # Our methods
    @method_wrapper
    def method1(self, *args):
        return args

    @method_wrapper
    def method2(self, *args):
        return args


class Obj(object):
    klass = None

    def __init__(self, object_class=ObjMeta):
        self.klass = object_class
        self._set_methods(object_class)

    # We dynamically load the method proxies that call into our meta class,
    # which actually contains the methods. This depends on the meta class'
    # 'methods' attribute, which holds a list of the names of its existing
    # methods. This is the part I wanted to happen automagically with the
    # help of decorators.
    def _set_methods(self, object_class):
        for method_name in object_class.methods:
            setattr(self, method_name, self._proxy_method(method_name))

    # Proxies the method that's being called to our meta class
    def _proxy_method(self, method_name):
        def wrapper(*fargs, **fkwargs):
            return getattr(self.klass(*fargs, **fkwargs), method_name)
        return wrapper()
I think it's ugly to write out a list of methods manually in the class, so perhaps a decorator could fix this.
It's for an open-source project I'm working on that ports underscore.js to Python. I understand some will say I should just use itertools or something; I'm doing this just for the love of programming and learning. BTW, the project is hosted here.
Thanks!
There are a few things wrong here.
Anything inside the inner wrapper runs when the method itself is called. In effect, you're replacing the method with that wrapper function, so your code as it stands appends the method name to the list each time the method is called, which probably isn't what you want. Instead, the append should happen at the method_wrapper level, i.e. outside of the inner wrapper. That code runs when the method is defined, which happens the first time the module containing the class is imported.
The second thing wrong is that you never actually call the method - you simply return it. Instead of return method, you should return the value of calling the method with the supplied arguments: return method(*args, **kwargs).
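A sketch of how those two fixes might fit together. Note that a decorator defined inside a class body cannot see the class's own namespace when it runs, so one option (the names here are illustrative) is to keep the registry at module level:

import functools

_registry = []  # names of decorated methods, collected at import time

def method_wrapper(method):
    # This part runs once, when the class body is executed (i.e. at import time):
    _registry.append(method.__name__)

    @functools.wraps(method)
    def wrapper(*args, **kwargs):
        # This part runs on every call: actually invoke the wrapped method.
        return method(*args, **kwargs)
    return wrapper

class ObjMeta(object):
    methods = _registry  # the class exposes the names the decorator collected

    @method_wrapper
    def method1(self, *args):
        return args

    @method_wrapper
    def method2(self, *args):
        return args

print(ObjMeta.methods)  # ['method1', 'method2']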
OK, here is the real-world scenario: I'm writing an application, and I have a class that represents a certain type of file (in my case photographs, but that detail is irrelevant to the problem). Each instance of the Photograph class should be unique to the photo's filename.
The problem is, when a user tells my application to load a file, I need to be able to identify when files are already loaded, and use the existing instance for that filename, rather than create duplicate instances for the same filename.
To me this seems like a good situation to use memoization, and there's a lot of examples of that out there, but in this case I'm not just memoizing an ordinary function, I need to be memoizing __init__(). This poses a problem, because by the time __init__() gets called it's already too late as there's a new instance created already.
In my research I found Python's __new__() method, and I was actually able to write a working trivial example, but it fell apart when I tried to use it on my real-world objects, and I'm not sure why (the only thing I can think of is that my real world objects were subclasses of other objects that I can't really control, and so there were some incompatibilities with this approach). This is what I had:
class Flub(object):
    instances = {}

    def __new__(cls, flubid):
        try:
            self = Flub.instances[flubid]
        except KeyError:
            self = Flub.instances[flubid] = super(Flub, cls).__new__(cls)
            print 'making a new one!'
            self.flubid = flubid
        print id(self)
        return self

    @staticmethod
    def destroy_all():
        for flub in Flub.instances.values():
            print 'killing', flub


a = Flub('foo')
b = Flub('foo')
c = Flub('bar')

print a
print b
print c

print a is b, b is c

Flub.destroy_all()
Which outputs this:
making a new one!
139958663753808
139958663753808
making a new one!
139958663753872
<__main__.Flub object at 0x7f4aaa6fb050>
<__main__.Flub object at 0x7f4aaa6fb050>
<__main__.Flub object at 0x7f4aaa6fb090>
True False
killing <__main__.Flub object at 0x7f4aaa6fb050>
killing <__main__.Flub object at 0x7f4aaa6fb090>
It's perfect! Only two instances were made for the two unique ids given, and Flub.instances clearly lists only two.
But when I tried to take this approach with the objects I was using, I got all kinds of nonsensical errors about how __init__() took only 0 arguments, not 2. So I'd change some things around and then it would tell me that __init__() needed an argument. Totally bizarre.
After a while of fighting with it, I basically just gave up and moved all the __new__() black magic into a staticmethod called get, such that I could call Photograph.get(filename) and it would only call Photograph(filename) if filename wasn't already in Photograph.instances.
Does anybody know where I went wrong here? Is there some better way to do this?
Another way of thinking about it is that it's similar to a singleton, except it's not globally singleton, just singleton-per-filename.
Here's my real-world code using the staticmethod get if you want to see it all together.
Let us see two points about your question.
Using memoize
You can use memoization, but you should decorate the class, not the __init__ method. Suppose we have this memoizator:
def get_id_tuple(f, args, kwargs, mark=object()):
    """
    Some quick'n'dirty way to generate a unique key for a specific call.
    """
    l = [id(f)]
    for arg in args:
        l.append(id(arg))
    l.append(id(mark))  # separates positional args from keyword args in the key
    for k, v in kwargs.items():
        l.append(k)
        l.append(id(v))
    return tuple(l)
_memoized = {}

def memoize(f):
    """
    Some basic memoizer
    """
    def memoized(*args, **kwargs):
        key = get_id_tuple(f, args, kwargs)
        if key not in _memoized:
            _memoized[key] = f(*args, **kwargs)
        return _memoized[key]
    return memoized
Now you just need to decorate the class:
@memoize
class Test(object):
    def __init__(self, somevalue):
        self.somevalue = somevalue
Let us see a test:
tests = [Test(1), Test(2), Test(3), Test(2), Test(4)]
for test in tests:
    print test.somevalue, id(test)
The output is below. Note that the same parameters yield the same id of the returned object:
1 3072319660
2 3072319692
3 3072319724
2 3072319692
4 3072319756
Anyway, I would prefer to create a function to generate the objects and memoize that function instead. It seems cleaner to me, but it may just be an irrelevant pet peeve of mine:
class Test(object):
    def __init__(self, somevalue):
        self.somevalue = somevalue

@memoize
def get_test_from_value(somevalue):
    return Test(somevalue)
Using __new__:
Or, of course, you can override __new__. Some days ago I posted an answer about the ins, outs and best practices of overriding __new__ that can be helpful. Basically, it says to always pass *args, **kwargs to your __new__ method.
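A minimal sketch of that advice (the class name is hypothetical): accept *args and **kwargs in __new__ so that __new__ and __init__ can share the same call signature, but do not forward the extras to object.__new__:

class Memo(object):
    _instances = {}

    def __new__(cls, *args, **kwargs):
        # Accept everything; forward only cls to object.__new__.
        key = (cls, args, tuple(sorted(kwargs.items())))
        if key not in cls._instances:
            cls._instances[key] = super(Memo, cls).__new__(cls)
        return cls._instances[key]

    def __init__(self, somevalue):
        # Caveat: __init__ still runs on every call, even for cached
        # instances, so it should be idempotent.
        self.somevalue = somevalue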
I, for one, would prefer to memoize a function which creates the objects, or even to write a specific function which would take care of never recreating an object for the same parameters. Of course, however, this is mostly an opinion of mine, not a rule.
The solution that I ended up using is this:
class memoize(object):
    def __init__(self, cls):
        self.cls = cls
        self.__dict__.update(cls.__dict__)

        # This bit allows staticmethods to work as you would expect.
        for attr, val in cls.__dict__.items():
            if type(val) is staticmethod:
                self.__dict__[attr] = val.__func__

    def __call__(self, *args):
        key = '//'.join(map(str, args))
        if key not in self.cls.instances:
            self.cls.instances[key] = self.cls(*args)
        return self.cls.instances[key]
And then you decorate the class with this, not __init__. Although brandizzi provided me with that key piece of information, his example decorator didn't function as desired.
I found this concept quite subtle, but basically when you're using decorators in Python, you need to understand that the thing being decorated (whether it's a method or a class) is actually replaced by the decorator itself. So, for example, when I tried to access Photograph.instances or Camera.generate_id() (a staticmethod), I couldn't access them, because Photograph no longer refers to the original Photograph class; it refers to the memoized function (from brandizzi's example).
To get around this, I had to create a decorator class that takes all the attributes and static methods from the decorated class and exposes them as its own. Almost like a subclass, except that the decorator class doesn't know ahead of time which classes it will be decorating, so it has to copy the attributes over after the fact.
The end result is that any instance of the memoize class becomes an almost transparent wrapper around the actual class that it has decorated, with the exception that attempting to instantiate it (but really calling it) will provide you with cached copies when they're available.
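A usage sketch, under the assumption (as in the OP's real code) that the decorated class defines its own instances dict:

@memoize
class Photograph(object):
    instances = {}

    def __init__(self, filename):
        self.filename = filename

a = Photograph('p1.jpg')
b = Photograph('p1.jpg')
assert a is b  # the cached instance is reused for the same filename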
The parameters to __new__ also get passed to __init__, so:
def __init__(self, flubid):
    ...
You need to accept the flubid argument there, even if you don't use it in __init__.
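For example, a minimal sketch of Flub with a matching __init__:

class Flub(object):
    instances = {}

    def __new__(cls, flubid):
        try:
            self = Flub.instances[flubid]
        except KeyError:
            self = Flub.instances[flubid] = super(Flub, cls).__new__(cls)
        return self

    def __init__(self, flubid):
        # __init__ is invoked with the same arguments as __new__,
        # so it must accept flubid even if it ignores it.
        pass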
Here is the relevant comment, taken from typeobject.c in Python 2.7.3:
/* You may wonder why object.__new__() only complains about arguments
when object.__init__() is not overridden, and vice versa.
Consider the use cases:
1. When neither is overridden, we want to hear complaints about
excess (i.e., any) arguments, since their presence could
indicate there's a bug.
2. When defining an Immutable type, we are likely to override only
__new__(), since __init__() is called too late to initialize an
Immutable object. Since __new__() defines the signature for the
type, it would be a pain to have to override __init__() just to
stop it from complaining about excess arguments.
3. When defining a Mutable type, we are likely to override only
__init__(). So here the converse reasoning applies: we don't
want to have to override __new__() just to stop it from
complaining.
4. When __init__() is overridden, and the subclass __init__() calls
object.__init__(), the latter should complain about excess
arguments; ditto for __new__().
Use cases 2 and 3 make it unattractive to unconditionally check for
excess arguments. The best solution that addresses all four use
cases is as follows: __init__() complains about excess arguments
unless __new__() is overridden and __init__() is not overridden
(IOW, if __init__() is overridden or __new__() is not overridden);
symmetrically, __new__() complains about excess arguments unless
__init__() is overridden and __new__() is not overridden
(IOW, if __new__() is overridden or __init__() is not overridden).
However, for backwards compatibility, this breaks too much code.
Therefore, in 2.6, we'll *warn* about excess arguments when both
methods are overridden; for all other cases we'll use the above
rules.
*/
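A quick illustration of use cases 2 and 3 from that comment (works in both Python 2 and 3):

class OnlyNew(object):
    def __new__(cls, x):
        return super(OnlyNew, cls).__new__(cls)

class OnlyInit(object):
    def __init__(self, x):
        self.x = x

OnlyNew(1)   # fine: object.__init__ ignores the excess argument
OnlyInit(1)  # fine: object.__new__ ignores the excess argument

class Neither(object):
    pass

# Neither(1) would raise a TypeError, since neither method is
# overridden and any argument is treated as a likely bug.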
I was trying to figure this out as well, and I put together a solution that combines some tips from other Stack Overflow questions (links in the code comments).
If anyone still needs, try this out:
import functools

# Sentinel marking the start of keyword arguments in a cache key; a single
# module-level instance so the same call always produces the same key.
_kwd_mark = object()

def memoize(f):
    class Memoized:
        def __init__(self, func):
            self._f = func
            self._cache = {}
            # Make the Memoized class masquerade as the object we are memoizing.
            # Preserve class attributes
            functools.update_wrapper(self, func)
            # Preserve static methods
            # From https://stackoverflow.com/questions/11174362
            for k, v in func.__dict__.items():
                self.__dict__[k] = v.__func__ if type(v) is staticmethod else v

        def __call__(self, *args, **kwargs):
            # Generate a hashable key from the positional and keyword args
            key = args
            if kwargs:
                key += (_kwd_mark,)
                for k, v in sorted(kwargs.items()):
                    key += (k, v)
            key = hash(key)
            if key in self._cache:
                return self._cache[key]
            else:
                self._cache[key] = self._f(*args, **kwargs)
                return self._cache[key]

        def __get__(self, instance, owner):
            """
            From https://stackoverflow.com/questions/30104047/how-can-i-decorate-an-instance-method-with-a-decorator-class
            """
            return functools.partial(self.__call__, instance)

        def __instancecheck__(self, other):
            """Make isinstance() work"""
            return isinstance(other, self._f)

    return Memoized(f)
Then you can use it like so:
@memoize
class Test:
    def __init__(self, value):
        self._value = value

    @property
    def value(self):
        return self._value
Uploaded the full thing with documentation to: https://github.com/spoorn/nemoize
I have written a decorator that I use to ensure that the keyword arguments passed to a constructor are the correct/expected ones. The code is the following:
from functools import wraps

def keyargs_check(keywords):
    """
    This decorator ensures that the keys passed in kwargs are the ones that
    are specified in the passed tuple. When applied, this decorator will
    check the keywords and will throw an exception if the developer used
    one that is not recognized.

    @type keywords: tuple
    @param keywords: A tuple with all the keywords recognized by the function.
    """
    def wrap(f):
        @wraps(f)
        def newFunction(*args, **kw):
            # we are going to add an extra check on kw
            for current_key in kw.keys():
                if current_key not in keywords:
                    raise ValueError(
                        "The key {0} is not a recognized parameter of {1}.".format(
                            current_key, f.__name__))
            return f(*args, **kw)
        return newFunction
    return wrap
An example use of this decorator would be the following:
class Person(object):
    @keyargs_check(("name", "surname", "age"))
    def __init__(self, **kwargs):
        # perform init according to kwargs
        ...
Using the above code, if the developer passes a keyword argument like "blah", it will throw an exception. Unfortunately, my implementation has a major problem with inheritance. If I define the following:
class PersonTest(Person):
    @keyargs_check(("test",))
    def __init__(self, **kwargs):
        Person.__init__(self, **kwargs)
Because I'm passing kwargs to the superclass's init method, I'm going to get an exception, because "test" is not in the tuple passed to the decorator of the superclass. Is there a way to let the decorator used in the superclass know about the extra keywords? Or, even better, is there a standard way to achieve what I want?
Update: I am more interested in automating the way I throw an exception when a developer passes a wrong kwarg than in the fact that I use kwargs instead of args. What I mean is, I do not want to have to write the code that checks the args passed to the method in every class.
Your decorator is not necessary. The only thing the decorator does that can't be done with the standard syntax is prevent keyword args from absorbing positional arguments. Thus
class Base(object):
    def __init__(self, name=None, surname=None, age=None):
        # some code
        ...

class Child(Base):
    def __init__(self, test=None, **kwargs):
        Base.__init__(self, **kwargs)
The advantage of this is that kwargs in Child will not contain test. The problem is that you can muck it up with a call like c = Child('red herring'). This is fixed in Python 3.0 with keyword-only arguments, as sketched below.
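For reference, a sketch of the Python 3 keyword-only form that closes that hole:

class Base:
    # The bare * makes every following parameter keyword-only.
    def __init__(self, *, name=None, surname=None, age=None):
        self.name, self.surname, self.age = name, surname, age

class Child(Base):
    def __init__(self, *, test=None, **kwargs):
        super().__init__(**kwargs)

Child(test=1, name="x")   # fine; kwargs in Child does not contain test
# Child('red herring')    # TypeError: positional arguments are rejected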
The problem with your approach is that you're trying to use a decorator to do a macro's job, which is unpythonic. The only thing that would get you what you want is something that modifies the locals of the innermost function (f in your code, specifically the kwargs variable). How should your decorator know the wrapped function's insides? How would it know that it calls a superclass?