I have a base class whose numerical attributes are simply passed in as keyword arguments on initialization and added to the instance's dictionary:
class baseclass(object):
    def __init__(self, **kwargs):
        self.__dict__.update(kwargs)

    def calcValue(self):
        return sum(vars(self).values())
Now I have a class derived from this one that adds additional attributes, e.g.:
class childclass(baseclass):
    def __init__(self, stringValue, **kwargs):
        super(childclass, self).__init__(**kwargs)
        self.name = stringValue
Now I would like to have a method in my baseclass that iterates only over the attributes that were added to the base class, not the ones added by child classes. For example, if I create an instance of the child class like this:
instance = childclass("myname", a=1, b=2, c=3)
and then call the calcValue method,
instance.calcValue()
it should return 1 + 2 + 3 = 6. But vars(self) returns the full instance dictionary, including the string from the childclass attribute, which of course cannot be summed with the numbers. Is there a way to access only those attributes of the instance that belong to the respective class?
You are storing all your attributes as ordinary values in the instance's __dict__, which means that, without any further hints, they are indistinguishable from one another.
Python has a couple of mechanisms for treating attributes in special ways. If you declared the attributes in the base class body itself and only initialized their values inside the __init__ method, it would be possible to introspect the base class's __dict__ (rather than the instance's __dict__), or the __annotations__ attribute on that class.
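A minimal sketch of that alternative, assuming Python 3 variable annotations and purely illustrative attribute names (a, b, c are made up here):
class baseclass(object):
    # Numeric attributes declared in the class body; their names end up in __annotations__.
    a: int = 0
    b: int = 0
    c: int = 0

    def __init__(self, **kwargs):
        self.__dict__.update(kwargs)

    def calcValue(self):
        # Sum only the attributes declared on baseclass itself.
        return sum(getattr(self, name) for name in baseclass.__annotations__)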
As the example stands, though, one easy option is to use a dedicated attribute to record which attribute names were added by the base class, and then consult it as the source of attribute names:
class baseclass(object):
    def __init__(self, **kwargs):
        self.__dict__.update(kwargs)
        self._numeric_attrs = set(kwargs.keys())

    def calcValue(self):
        return sum(getattr(self, attr) for attr in self._numeric_attrs)
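With that in place, the childclass from the question works unchanged; a quick check, reusing the question's example values:
class childclass(baseclass):
    def __init__(self, stringValue, **kwargs):
        super(childclass, self).__init__(**kwargs)
        self.name = stringValue  # not recorded in _numeric_attrs, so calcValue ignores it

instance = childclass("myname", a=1, b=2, c=3)
print(instance.calcValue())  # 6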
Another approach that is simple, safe and effective - although it adds some overhead - is to store the base class attributes in a separate dictionary and serve them through __getattr__:
class BaseClass(object):
    def __init__(self, **kwargs):
        self._attribs = kwargs

    def __getattr__(self, name):
        try:
            return self._attribs[name]
        except KeyError:
            raise AttributeError("object {} has no attribute {}".format(type(self).__name__, name))

    def calcValue(self):
        return sum(self._attribs.values())
I usually try to avoid __getattr__, as it makes the code harder to inspect and maintain, but since your class already has no definite API it doesn't make much difference here.
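As a quick sanity check of this variant (the subclass below just mirrors the one from the question):
class ChildClass(BaseClass):
    def __init__(self, stringValue, **kwargs):
        super(ChildClass, self).__init__(**kwargs)
        self.name = stringValue  # stored in the instance __dict__, not in _attribs

instance = ChildClass("myname", a=1, b=2, c=3)
print(instance.a)            # 1, served by __getattr__
print(instance.calcValue())  # 6, the string never gets in the way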
Here is a basic metaclass intended to provide a custom __init__:
class D(type):
    def __init__(cls, name, bases, attributes):
        super(D, cls).__init__(name, bases, attributes)
        for hook in R.registered_hooks:
            setattr(cls, hook.__qualname__, hook)
The for loop iterates over a list of functions, calling setattr(cls, hook.__qualname__, hook) for each of them.
Here is a class that uses the above metaclass:
class E(metaclass=D):
    def __init__(self):
        pass
The weird thing:
E().__dict__ prints {}
But E.__dict__ prints {'f1': <function f1 at 0x001>, 'f2': <function f2 at 0x002>}
I was expecting those functions to be added to the instance attributes, since __init__ provides a custom initialization for a class, but it seems the attributes were added as class attributes instead.
What is the cause of this and how can I add attributes to the instance, in this scenario? Thanks!
They are added as instance attributes. The instance, though, is the class, as an instance of the metaclass. You aren't defining an __init__ method for E; you are defining the method that type uses to initialize E itself.
If you just want attributes added to the instance of E, you're working at the wrong level; you should define a mixin and use multiple inheritance for that.
For __init__ to be called on the instantiation of an object obj, you need to have defined type(obj).__init__. By using a metaclass, you defined type(type(obj)).__init__. You are working at the wrong level.
In this case, you only need inheritance, not a metaclass.
class D(object):
    def __init__(self, *args):
        print('D.__init__ was called')

class E(D):
    pass

e = E()  # prints: D.__init__ was called
Note that you will still find that e.__dict__ is empty.
print(e.__dict__) # {}
This is because your instances have no attributes of their own; methods are stored on, and looked up in, the class.
If you're using a metaclass, that means you're changing the definition of a class, which indirectly affects the attributes of instances of that class. Assuming you actually want to do this:
You're creating a custom __init__ method. As with normal instantiation, __init__ is called on an object that has been already created. Generally when you're doing something meaningful with metaclasses, you'll want to add to the class's dict before instantiation, which is done in the __new__ method to avoid issues that may come up in subclassing. It should look something like this:
class D(type):
    def __new__(cls, name, bases, dct):
        for hook in R.registered_hooks:
            dct[hook.__qualname__] = hook
        return super(D, cls).__new__(cls, name, bases, dct)
This will not modify the __dict__ of instances of that class, but attribute resolution for instances also looks for class attributes. So if you add foo as an attribute for E in the metaclass, E().foo will return the expected value even though it's not visible in E().__dict__.
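For a concrete illustration, here R.registered_hooks is just assumed to be a plain list of functions, standing in for whatever registry R actually is in your code:
class R:
    registered_hooks = []   # stand-in registry; assumed, not part of the question's code

def f1(self):
    return "f1 called"
R.registered_hooks.append(f1)

class D(type):
    def __new__(cls, name, bases, dct):
        for hook in R.registered_hooks:
            dct[hook.__qualname__] = hook
        return super(D, cls).__new__(cls, name, bases, dct)

class E(metaclass=D):
    pass

e = E()
print(e.__dict__)  # {} -- nothing lands on the instance
print(e.f1())      # 'f1 called' -- resolved through the class attribute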
In Python, I currently have instances of a class like MyClass('name1'), MyClass('name2') and so on.
I want to make it so that each instance has its own superclass, i.e., I want MyClass('name1') to be an instance of Name1MyClass and MyClass('name2') to be an instance of Name2MyClass. Name1MyClass and Name2MyClass would be dynamically generated subclasses of MyClass. I can't figure out how to do this, because it seems that Python always makes whatever is returned from __new__ an instance of that class. It isn't clear to me how to do it in a metaclass either.
The reason I want to do this is that I want to define __doc__ docstrings on the instances. But it seems that help completely ignores __doc__ on instances; it only looks on classes. So to put a different docstring on each instance, I need to make each instance have its own custom class.
I could be wrong, but I don't think you want a metaclass here. __metaclass__es are used when the class is created, not when you call the class to construct a new instance of the class (or something else).
Here's an answer using __new__ without a metaclass. It feels a bit hacky, but it seems to work:
_sentinel = Ellipsis

class MyClass(object):
    def __new__(cls, name):
        if name is _sentinel:
            return object.__new__(cls)
        else:
            instance = type(name + cls.__name__, (MyClass,), {})(_sentinel)
            # Initialization goes here.
            return instance

print type(MyClass('name1'))
print type(MyClass('name2'))
There's a catch here -- all the business logic of initializing the new instance must be done in __new__. Since __new__ is returning a different type than the class it is bound to, __init__ won't get called.
Another option is to create a class factory:
class MyClass(object):
    pass

def class_factory(name):
    new_cls = type(name + MyClass.__name__, (MyClass,), {})
    return new_cls()  # Or pass whatever you want in here...

print type(class_factory('name1'))
print type(class_factory('name2'))
Finally, you could even create a non-__new__ class method:
class MyClass(object):
    @classmethod
    def class_factory(cls, name):
        new_cls = type(name + cls.__name__, (cls,), {})
        return new_cls()  # Or pass whatever you want in here...

print type(MyClass.class_factory('name1'))
print type(MyClass.class_factory('name2'))
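Tying this back to the docstring motivation: since help() looks at the class, the factory can simply drop a per-instance docstring into each generated class. A rough sketch (the doc parameter and its text are made up for illustration):
class MyClass(object):
    @classmethod
    def class_factory(cls, name, doc=None):
        # Each call builds a fresh subclass, so every instance gets its own class-level __doc__.
        new_cls = type(name + cls.__name__, (cls,), {'__doc__': doc})
        return new_cls()

obj = MyClass.class_factory('name1', doc="Docs specific to the name1 instance.")
print(type(obj).__name__)  # name1MyClass
print(type(obj).__doc__)   # Docs specific to the name1 instance.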
I am a python learner and currently hacking up a class with variable number of fields as in the "Bunch of Named Stuff" example here.
class Bunch:
    def __init__(self, **kwds):
        self.__dict__.update(kwds)
I also want to write a __setattr__ in this class in order to check the input attribute name. But the Python documentation says,
If __setattr__() wants to assign to an instance attribute, it should not simply execute "self.name = value" -- this would cause a recursive call to itself. Instead, it should insert the value in the dictionary of instance attributes, e.g., "self.__dict__[name] = value".
For new-style classes, rather than accessing the instance dictionary, it should call the base class method with the same name, for example, "object.__setattr__(self, name, value)".
In that case, should I also use object.__dict__ in the __init__ function to replace self.__dict__?
You can use
class Bunch:
    def __init__(self, **kwds):
        self.__dict__.update(kwds)

    def __setattr__(self, name, value):
        # do your verification stuff
        self.__dict__[name] = value
or, with a new-style class:
class Bunch(object):
    def __init__(self, **kwds):
        self.__dict__.update(kwds)

    def __setattr__(self, name, value):
        # do your verification stuff
        super(Bunch, self).__setattr__(name, value)
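For instance, with a purely illustrative verification rule (rejecting underscore-prefixed names is just an example), assignments made after construction go through the check; note that __dict__.update() in __init__ bypasses __setattr__, so the constructor keywords are not checked:
class Bunch(object):
    def __init__(self, **kwds):
        self.__dict__.update(kwds)        # bypasses __setattr__
    def __setattr__(self, name, value):
        if name.startswith('_'):          # illustrative check only
            raise AttributeError("private names are not allowed: %r" % name)
        super(Bunch, self).__setattr__(name, value)

b = Bunch(x=1)
b.y = 2        # goes through __setattr__, so the check runs
# b._z = 3     # would raise AttributeError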
No. You should define your class as class Bunch(object), but continue to refer to self.__dict__.
You only need to use the object.__setattr__ method inside your own __setattr__ definition, to prevent infinite recursion. __dict__ is not a method but an attribute on the object itself, so object.__dict__ would not work.
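To see why the recursion warning matters, a tiny sketch of the failure mode the docs describe (and that object.__setattr__ avoids):
class Broken(object):
    def __setattr__(self, name, value):
        self.name = value   # goes through __setattr__ again: infinite recursion

try:
    Broken().x = 1
except RuntimeError:        # RecursionError (a RuntimeError subclass) on Python 3
    print("recursion, exactly as the docs warn")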
Goal: Make a decorator which can modify the scope that it is used in.
If it worked:
class Blah():  # or perhaps class Blah(ParentClassWhichMakesThisPossible)
    def one(self):
        pass

    @decorated
    def two(self):
        pass
>>> Blah.decorated
["two"]
Why? I essentially want to write classes which can maintain specific dictionaries of methods, so that I can retrieve lists of available methods of different types on a per class basis. errr.....
I want to do this:
class RuleClass(ParentClass):
    @rule
    def blah(self):
        pass

    @rule
    def kapow(self):
        pass

    def shazam(self):
        pass

class OtherRuleClass(ParentClass):
    @rule
    def foo(self):
        pass

    def bar(self):
        pass
>>> RuleClass.rules.keys()
["blah", "kapow"]
>>> OtherRuleClass.rules.keys()
["foo"]
You can do what you want with a class decorator (in Python 2.6) or a metaclass. The class decorator version:
def rule(f):
    f.rule = True
    return f

def getRules(cls):
    cls.rules = {}
    for attr, value in cls.__dict__.iteritems():
        if getattr(value, 'rule', False):
            cls.rules[attr] = value
    return cls

@getRules
class RuleClass:
    @rule
    def foo(self):
        pass
The metaclass version would be:
def rule(f):
    f.rule = True
    return f

class RuleType(type):
    def __init__(self, name, bases, attrs):
        self.rules = {}
        for attr, value in attrs.iteritems():
            if getattr(value, 'rule', False):
                self.rules[attr] = value
        super(RuleType, self).__init__(name, bases, attrs)

class RuleBase(object):
    __metaclass__ = RuleType

class RuleClass(RuleBase):
    @rule
    def foo(self):
        pass
Notice that neither of these do what you ask for (modify the calling namespace) because it's fragile, hard and often impossible. Instead they both post-process the class -- through the class decorator or the metaclass's __init__ method -- by inspecting all the attributes and filling the rules attribute. The difference between the two is that the metaclass solution works in Python 2.5 and earlier (down to 2.2), and that the metaclass is inherited. With the decorator, subclasses have to each apply the decorator individually (if they want to set the rules attribute.)
Neither solution takes inheritance into account -- they don't look at the parent classes when collecting methods marked as rules, nor at the parent classes' rules attribute. It's not hard to extend either one to do that, if that's what you want.
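If inheritance of rules is wanted, one possible extension of the metaclass version is to seed the dict from the base classes before scanning the new class body (a sketch; items() is used so it runs on Python 2 and 3 alike):
class RuleType(type):
    def __init__(self, name, bases, attrs):
        self.rules = {}
        # Start from whatever rules the base classes already collected...
        for base in bases:
            self.rules.update(getattr(base, 'rules', {}))
        # ...then add the ones defined in this class body.
        for attr, value in attrs.items():
            if getattr(value, 'rule', False):
                self.rules[attr] = value
        super(RuleType, self).__init__(name, bases, attrs)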
The problem is that, at the time the decorated decorator is called, there is no object Blah yet: the class object is built only after the class body finishes executing. Simplest is to have decorated stash the info "somewhere else", e.g. in a function attribute, and then have a final pass (a class decorator or metaclass) reap that info into the dictionary you desire.
Class decorators are simpler, but they don't get inherited (so they wouldn't come from a parent class), while metaclasses are inherited -- so if you insist on inheritance, a metaclass it will have to be. Simplest-first, with a class decorator and the "list" variant you have at the start of your Q rather than the "dict" variant you have later:
import inspect

def classdecorator(aclass):
    decorated = []
    for name, value in inspect.getmembers(aclass, inspect.ismethod):
        if hasattr(value, '_decorated'):
            decorated.append(name)
            del value._decorated
    aclass.decorated = decorated
    return aclass

def decorated(afun):
    afun._decorated = True
    return afun
now,
@classdecorator
class Blah(object):
    def one(self):
        pass

    @decorated
    def two(self):
        pass
gives you the Blah.decorated list you request in the first part of your Q. Building a dict instead, as you request in the second part of your Q, just means changing decorated.append(name) to decorated[name] = value in the code above, and of course initializing decorated in the class decorator to an empty dict rather than an empty list.
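Spelled out, the dict-building variant of that class decorator might look like this (iterating over the class __dict__ so it sees the plain function objects, which also sidesteps Python 2/3 differences in inspect.ismethod):
def classdecorator(aclass):
    decorated = {}
    for name, value in list(aclass.__dict__.items()):
        if getattr(value, '_decorated', False):
            decorated[name] = value
            del value._decorated
    aclass.decorated = decorated
    return aclass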
The metaclass variant would use the metaclass's __init__ to perform essentially the same post-processing after the class body is built -- a metaclass's __init__ gets a dict corresponding to the class body as its last argument (but you'll have to support inheritance yourself by appropriately dealing with any base class's analogous dict or list). So the metaclass approach is only "somewhat" more complex in practice than a class decorator, but conceptually it's felt to be much more difficult by most people. I'll give all the details for the metaclass if you need them, but I'd recommend sticking with the simpler class decorator if feasible.
I have a class Parent. I want to define a __new__ for Parent so it does some magic upon instantiation (for why, see footnote). I also want children classes to inherit from this and other classes to get Parent's features. The Parent's __new__ would return an instance of a subclass of the child class's bases and the Parent class.
This is how the child class would be defined:
class Child(Parent, list):
    pass
But now I don't know what __new__ to call in Parent's __new__. If I call object.__new__, the above Child example complains that list.__new__ should be called. But how would Parent know that? I made it work by looping through all the __bases__ and calling each __new__ inside a try: block:
class Parent(object):
    def __new__(cls, *args, **kwargs):
        # There is a special wrapper function for instantiating instances of children
        # classes that passes in a 'bases' argument, which is the __bases__ of the
        # Children class.
        bases = kwargs.get('bases')
        if bases:
            cls = type('name', bases + (cls,), kwargs.get('attr', {}))
            for base in cls.__mro__:
                if base not in (cls, MyMainType):
                    try:
                        obj = base.__new__(cls)
                        break
                    except TypeError:
                        pass
            return obj
        return object.__new__(cls)
But this just looks like a hack. Surely, there must be a better way of doing this?
Thanks.
The reason I want to use __new__ is so I can return an object of a subclass that has some dynamic attributes (the magic __int__ attributes, etc) assigned to the class. I could have done this in __init__, but I would not be able to modify self.__class__ in __init__ if the new class has a different internal structure, which is the case here due to multiple inheritance.
I think this will get you what you want:
return super(Parent, cls).__new__(cls, *args, **kwargs)
and you won't need the bases keyword argument. Unless I'm getting you wrong and you're putting that in there on purpose.
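A quick check of why the super call solves the Child(Parent, list) case: super() walks Child's MRO, so the __new__ that actually runs is list.__new__ rather than object.__new__. The sketch below drops the extra arguments, since object.__new__ rejects them on Python 3 when no other base consumes them:
class Parent(object):
    def __new__(cls, *args, **kwargs):
        # Follows the MRO of the class being instantiated
        # (Child -> Parent -> list -> object), so a list base is honoured.
        return super(Parent, cls).__new__(cls)

class Child(Parent, list):
    pass

c = Child([1, 2, 3])
print(isinstance(c, list))  # True
print(c)                    # [1, 2, 3]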