How can I perform the equivalent of __setattr__ on an old-style class?
If you want to set an attribute on an instance of an old-style class, you can use the setattr built-in function; it works for old-style classes as well. Example -
>>> class Foo:
...     def __init__(self, blah):
...         self.blah = blah
...
>>> foo = Foo("Something")
>>> foo.blah
'Something'
>>> setattr(foo, 'blah', 'somethingElse')
>>> foo.blah
'somethingElse'
You can use the built-in function on instances of any type of class.
Since the original question accepts "...equivalent methods," I'd like to demonstrate the proper means of implementing the special __setattr__() method in old-style classes.
tl;dr
Use self.__dict__[attr_name] = attr_value in the __setattr__(self, attr_name, attr_value) method of an old-style class.
__setattr__() Meets Old-style Class
Interestingly, both Python 2.7 and 3 do call __setattr__() methods defined by old-style classes. Unlike new-style classes, however, old-style classes provide no default __setattr__() method. To no one's surprise, this hideously complicates __setattr__() methods in old-style classes.
In the subclass __setattr__() of a new-style class, the superclass __setattr__() is usually called at the end to set the desired attribute. In the subclass __setattr__() of an old-style class, this usually raises an exception; in most cases, there is no superclass __setattr__(). Instead, the desired key-value pair of the special __dict__ instance variable must be manually set.
Example or It Didn't Happen
Consider a great old-style class resembling the phrase "The Black Goat of the Woods with a Thousand Young" and defining __setattr__() to prefix the passed attribute name by la_:
class ShubNiggurath:
    def __setattr__(self, attr_name, attr_value):
        # Do not ask why. It is not of human purport.
        attr_name = 'la_' + attr_name

        # Make it so. Do not call
        # super(ShubNiggurath, self).__setattr__(attr_name, attr_value), for no
        # such method exists.
        self.__dict__[attr_name] = attr_value
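A brief demonstration of the class above. (Under Python 3 the class is new-style, but the `self.__dict__` assignment behaves identically, so the sketch runs there too.)

```python
class ShubNiggurath:
    def __setattr__(self, attr_name, attr_value):
        # Prefix every attribute name, then write directly into the
        # instance dictionary to avoid recursing into __setattr__.
        attr_name = 'la_' + attr_name
        self.__dict__[attr_name] = attr_value

beast = ShubNiggurath()
beast.young = 1000
assert beast.la_young == 1000        # stored under the prefixed name
assert 'young' not in beast.__dict__  # the original name was never set
```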
Asymmetries in the Darkness
Curiously, old-style classes do provide a default __getattr__() method. How Python 2.7 permitted this obscene asymmetry to stand bears no thinking upon – for it is equally hideous and shameful!
But it is.
Related
A method defined on a metaclass is accessible by classes that use the metaclass. However, the method will not be accessible on the instances of these classes.
My first guess was that metaclass methods would not be accessible on either classes or instances.
My second guess was that metaclass methods would be accessible on both classes and instances.
I find it surprising that metaclass methods are instead accessible on classes, but not on instances.
What is the purpose of this behavior? Is there any case where I can use this to an advantage? If there is no intended purpose, how does the implementation work such that this is the resulting behavior?
class Meta(type):
    def __new__(mcs, name, bases, dct):
        mcs.handle_foo(dct)
        return type.__new__(mcs, name, bases, dct)

    @classmethod
    def handle_foo(mcs, dct):
        """
        The sole purpose of this method is to encapsulate some logic
        instead of writing code directly in __new__,
        and also that subclasses of the metaclass can override this
        """
        dct['foo'] = 1

class Meta2(Meta):
    @classmethod
    def handle_foo(mcs, dct):
        """Example of Metaclass' subclass overriding"""
        dct['foo'] = 10000

class A(metaclass=Meta):
    pass

class B(metaclass=Meta2):
    pass

assert A.foo == 1
assert B.foo == 10000

assert hasattr(A, 'handle_foo')
assert hasattr(B, 'handle_foo')
# What is the purpose or reason of this method being accessible on A and B?
# If there is no purpose, what about the implementation explains why it is accessible here?

instance = A()
assert not hasattr(instance, 'handle_foo')
# Why is this method not accessible on the instance, when it is on the class?
# What is the purpose or reason for this method not being accessible on the instance?
What is the purpose of this behavior? What use case is this behavior intended to support? I am interested in a direct quote from the documentation, if one exists.
If there is no purpose, and this is simply a byproduct of the implementation, why does the implementation result in this behavior? I.e., how are metaclasses implemented such that methods defined on the metaclass are also accessible on classes that use the metaclass, but not on the instances of those classes?
The only practical implication of this that I have found is the following: PyCharm will include these metaclass methods in the code-completion box when you start typing A. (i.e., on the class). I don't want users of my framework to see this. One way to mitigate it is by renaming these methods as private methods (e.g. _handle_foo), but I would still rather they not show up in code completion at all. Using a dunder naming convention (__) won't work, as subclasses of the metaclass would then not be able to override the methods.
(I've edited this post extensively due to the thoughtful feedback from Miyagi and Serge, in order to make it more clear as to why I am defining methods on the metaclass in the first place: simply in order to encapsulate some behavior instead of putting all the code in __new__, and to allow those methods to be overridden by subclasses of the metaclass)
Let us first look at this in a non-meta situation: We define a function inside a class and access it via the instance.
>>> class Foo:
...     def bar(self): ...
...
>>> Foo.bar
<function __main__.Foo.bar(self)>
>>> foo = Foo()
>>> foo.bar
<bound method Foo.bar of <__main__.Foo object at 0x10dc75790>>
Of note is that the two "attributes" are not the same kind: The class' attribute is the very thing we put into it, but the instance's "attribute" is a dynamically created thing.
Likewise, methods defined on a metaclass are not inherited by the class; they are (dynamically) bound to the metaclass' instances, i.e. the classes.
>>> Meta.meta_method # direct access to "class" attribute
<function __main__.Meta.meta_method(cls)>
>>> Foo.meta_method # instance access to "class" attribute
<bound method Meta.meta_method of <class '__main__.Foo'>>
This is the exact same mechanism – because a class is "just" a metaclass' instance.
It should be obvious at this point that the attributes defined on the metaclass and dynamically bound to the class are not the same thing, and there is no reason for them to behave the same. Whether lookup of attributes on an instance picks up metaclass-methods from their dynamic form on the class, directly from the metaclass or not at all is a judgement call.
Python's data model defines that default lookup only takes into account the instance and the instance's type. The instance's type's type is explicitly excluded.
Invoking Descriptors
[…]
The default behavior for attribute access is to get, set, or delete the attribute from an object’s dictionary. For instance, a.x has a lookup chain starting with a.__dict__['x'], then type(a).__dict__['x'], and continuing through the base classes of type(a) excluding metaclasses.
There is no rationale given for this approach. However, it is sufficient to replicate common instantiation+inheritance behaviour of other languages. At the same time, it avoids arbitrarily deep lookups and the issue that type is a recursive metaclass.
Notably, since a metaclass is in full control of how a class behaves, it can directly define methods in the class or redefine attribute access to circumvent the default behaviour.
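The lookup rule described above can be sketched in a few lines (the names `Meta`, `Foo`, and `meta_method` are illustrative, not from any library):

```python
class Meta(type):
    def meta_method(cls):
        # Lives on the metaclass; classes using Meta see it as a bound method.
        return 'hello from the metaclass'

class Foo(metaclass=Meta):
    pass

# Class-level lookup consults type(Foo) == Meta, so this works:
assert Foo.meta_method() == 'hello from the metaclass'

# Instance-level lookup stops at type(instance) == Foo and its bases;
# the metaclass is explicitly excluded, so the attribute is absent:
instance = Foo()
assert not hasattr(instance, 'meta_method')
```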
'Everything in Python is an object'
So, do all objects have to have attributes and methods?
I read the statement below on a tutorial site. Could you give an example of a pre-defined object in Python that has neither attributes nor methods?
Some objects have neither attributes nor methods
Everything in Python is indeed an object, this is true. Even classes themselves are considered to be objects, and they're the product of the builtin class type, which is not surprisingly an object too. But objects will almost certainly inherit attributes, including methods or data, in most circumstances.
So, do all objects have to have attributes and methods?
Not necessarily; not all objects have their own attributes. For instance, an object can inherit an attribute from its class and its class's superclasses, but that attribute or method doesn't necessarily live in the instance's namespace dictionary. In fact, an instance's namespace can be empty, as in the following:
class Foo:
    pass

a = Foo()
print(a.__dict__)  # {}
a here doesn't have any attributes aside from those inherited from its class, so if you check its namespace through the builtin attribute __dict__ you'll find it to be an empty dictionary. But you might wonder: isn't a.__dict__ an attribute of a? Make a distinction between class-level attributes (attributes inherited from the class or its superclasses) and instance attributes (attributes that belong to the instance and usually live in its namespace __dict__).
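To make the class-level vs. instance-level distinction concrete, here is a small sketch (`Box` and `kind` are hypothetical names chosen for illustration):

```python
class Box:
    kind = 'generic'   # class-level attribute, lives in Box.__dict__

b = Box()
assert b.__dict__ == {}        # the instance namespace starts empty
assert b.kind == 'generic'     # lookup falls back to the class

b.kind = 'special'             # this creates an instance attribute
assert b.__dict__ == {'kind': 'special'}
assert Box.kind == 'generic'   # the class attribute is untouched
```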
Could you give an example of a pre-defined object in Python that has neither attributes nor methods?
If by pre-defined object you meant a builtin object, I can't imagine such a scenario. Again, even if there are no attributes on the object itself, in most cases there would be attributes inherited from its class or the class's superclasses, if there are any. Probably, and I'm guessing here, the tutorial is asking you to create a class that assigns no attributes to its objects, just like the code I included above.
And this already answers your question better: Is everything an object in python like ruby?
There's a hackish way to emulate a Python object with no attributes.
class NoAttr(object):

    def __getattribute__(self, attr):
        raise AttributeError("no attribute: %s" % attr)

    def __setattr__(self, attr, value):
        raise AttributeError("can't set attribute: %s" % attr)

    def __delattr__(self, attr):
        raise AttributeError("no attribute: %s" % attr)

a = NoAttr()
This instance a, for all intents and purposes, in pure Python, behaves like an object with no attributes (you can try hasattr on it).
There may be a low-level way to do this in a C extension by implementing a type in C that pathologically stops Python's object implementation from working. Anyway the margin here is too small for writing one.
A pre-defined object with no attributes would defeat the purpose of pre-defining it.
According to Python 2.7.12 documentation:
If __setattr__() wants to assign to an instance attribute, it should
not simply execute self.name = value — this would cause a recursive
call to itself. Instead, it should insert the value in the dictionary
of instance attributes, e.g., self.__dict__[name] = value. For
new-style classes, rather than accessing the instance dictionary, it
should call the base class method with the same name, for example,
object.__setattr__(self, name, value).
However, the following code works as one would expect:
class Class(object):
    def __setattr__(self, name, val):
        self.__dict__[name] = val

c = Class()
c.val = 42
print c.val
I know super(Class, obj).__setattr__(name, value) can ensure the __setattr__ methods of all base classes are called, but a classic class can also inherit from base classes. So why is this only recommended for new-style classes?
Or, on the other hand, why is doing so not recommended for classic classes?
New-style classes could be using slots, at which point there is no __dict__ to assign to. New-style classes also support other data descriptors, objects defined on the class that handle attribute setting or deletion for certain names.
From the documentation on slots:
By default, instances of both old and new-style classes have a dictionary for attribute storage. This wastes space for objects having very few instance variables. The space consumption can become acute when creating large numbers of instances.
The default can be overridden by defining __slots__ in a new-style class definition. The __slots__ declaration takes a sequence of instance variables and reserves just enough space in each instance to hold a value for each variable. Space is saved because __dict__ is not created for each instance.
Access to slots is instead implemented by adding data descriptors on the class; an object with __set__ and/or __delete__ methods for each such attribute.
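A quick sketch of why the `self.__dict__` route breaks down under `__slots__` (`Point` is a hypothetical example class):

```python
class Point(object):
    __slots__ = ('x', 'y')   # instances get slot descriptors, no __dict__

    def __setattr__(self, name, value):
        # self.__dict__[name] = value would raise AttributeError here,
        # because slotted instances have no __dict__ at all.
        object.__setattr__(self, name, value)  # dispatches to the slot descriptor

p = Point()
p.x = 3
assert p.x == 3
assert not hasattr(p, '__dict__')   # no instance dictionary exists
```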
Another example of data descriptors are property() objects that have a setter or deleter function attached. Setting a key with the same name as such a descriptor object in the __dict__ would be ignored as data descriptors cause attribute lookup to bypass the __dict__ altogether.
object.__setattr__() knows how to handle data descriptors, which is why you should just call that.
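The data-descriptor point can be illustrated with a property (`Temp` and `celsius` are invented names): writing into `__dict__` under the same name as a data descriptor is simply ignored on lookup, whereas `object.__setattr__()` routes the assignment through the descriptor.

```python
class Temp(object):
    def __init__(self):
        self._c = 0.0

    @property
    def celsius(self):
        return self._c

    @celsius.setter
    def celsius(self, value):
        self._c = float(value)

t = Temp()
t.celsius = 21                 # object.__setattr__ dispatches to the setter
t.__dict__['celsius'] = 99     # allowed, but invisible to attribute lookup:
assert t.celsius == 21.0       # the data descriptor on the class wins
```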
In python 2.x take the following class:
class Person:
    def __init__(self, name):
        self.name = name

    def myrepr(self):
        return str(self.name)

    def __getattr__(self, attr):
        print('Fetching attr: %s' % attr)
        if attr == '__repr__':
            return self.myrepr
Now if you create an instance and echo it in the shell (to call __repr__), like
p = Person('Bob')
p
You get
Fetching attr: __repr__
Bob
Without the __getattr__ overload you'd have just got the default <__main__.Person instance at 0x7fb8800c6e18> kind of thing.
My question is why the __getattr__ overload is even capable of handling calls to __repr__ (and other builtins like that) if they are not defined elsewhere. These are not ordinary attributes; they are operators, more like methods.
(This no longer works in python 3.x so I guess they got rid of the handling by __getattr__ of the builtins.)
It's not a Python 2 or 3 thing, it's a new-style vs old-style class thing. In old-style classes these methods had no special meaning, they were treated like simple attributes.
In new-style classes the special methods are always looked up on the class (implicit lookup), not the instance.
For new-style classes, implicit invocations of special methods are
only guaranteed to work correctly if defined on an object’s type, not
in the object’s instance dictionary.
Old-style classes:
For old-style classes, special methods are always looked up in exactly
the same way as any other method or attribute. This is the case
regardless of whether the method is being looked up explicitly as in
x.__getitem__(i) or implicitly as in x[i].
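Under Python 3 (new-style classes only), the explicit-vs-implicit difference can be observed directly; `C` here is a hypothetical example:

```python
class C(object):
    pass

c = C()
c.__repr__ = lambda: 'custom!'   # set only on the instance

# repr() performs an implicit lookup on type(c), so the instance
# attribute is bypassed and the default representation is used:
assert 'custom!' not in repr(c)

# An explicit attribute lookup still finds the instance attribute:
assert c.__repr__() == 'custom!'
```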
Related: Overriding special methods on an instance
Following this answer it seems that a class' metaclass may be changed after the class has been defined by using the following*:
class MyMetaClass(type):
    # Metaclass magic...
    pass

class A(object):
    pass

A = MyMetaClass(A.__name__, A.__bases__, dict(A.__dict__))
Defining a function
def metaclass_wrapper(cls):
    return MyMetaClass(cls.__name__, cls.__bases__, dict(cls.__dict__))
allows me to apply a decorator to a class definition like so,
@metaclass_wrapper
class B(object):
    pass
It seems that the metaclass magic is applied to B, however B has no __metaclass__ attribute. Is the above method a sensible way to apply metaclasses to class definitions, even though I am defining and re-defining a class, or would I be better off simply writing
class B(object):
    __metaclass__ = MyMetaClass
I presume there are some differences between the two methods.
*Note, the original answer in the linked question, MyMetaClass(A.__name__, A.__bases__, A.__dict__), returns a TypeError:
TypeError: type() argument 3 must be a dict, not dict_proxy
It seems that the __dict__ attribute of A (the class definition) has a type dict_proxy, whereas the type of the __dict__ attribute of an instance of A has a type dict. Why is this? Is this a Python 2.x vs. 3.x difference?
Admittedly, I am a bit late to the party. However, I felt this was worth adding.
This is completely doable. That being said, there are plenty of other ways to accomplish the same goal. However, the decoration solution, in particular, allows for delayed evaluation (obj = dec(obj)), which using __metaclass__ inside the class does not. In typical decorator style, my solution is below.
There is a tricky thing that you may run into if you just construct the class without changing the dictionary or copying its attributes. Any attributes that the class had previously (before decorating) will appear to be missing. So, it is absolutely essential to copy these over and then tweak them as I have in my solution.
Personally, I like to be able to keep track of how an object was wrapped. So, I added the __wrapped__ attribute, which is not strictly necessary. It also makes it more like functools.wraps in Python 3 for classes. However, it can be helpful with introspection. Also, __metaclass__ is added to act more like the normal metaclass use case.
def metaclass(meta):
    def metaclass_wrapper(cls):
        __name = str(cls.__name__)
        __bases = tuple(cls.__bases__)
        __dict = dict(cls.__dict__)

        for each_slot in __dict.get("__slots__", tuple()):
            __dict.pop(each_slot, None)

        __dict["__metaclass__"] = meta
        __dict["__wrapped__"] = cls

        return meta(__name, __bases, __dict)
    return metaclass_wrapper
For a trivial example, take the following.
class MetaStaticVariablePassed(type):
    def __new__(meta, name, bases, dct):
        dct["passed"] = True
        return super(MetaStaticVariablePassed, meta).__new__(meta, name, bases, dct)

@metaclass(MetaStaticVariablePassed)
class Test(object):
    pass
This yields the nice result...
|1> Test.passed
|.> True
Using the decorator in the less usual, but identical way...
class Test(object):
    pass

Test = metaclass(MetaStaticVariablePassed)(Test)
...yields, as expected, the same nice result.
|1> Test.passed
|.> True
The class has no __metaclass__ attribute set... because you never set it!
Which metaclass to use is normally determined by a name __metaclass__ set in a class block. The __metaclass__ attribute isn't set by the metaclass. So if you invoke a metaclass directly rather than setting __metaclass__ and letting Python figure it out, then no __metaclass__ attribute is set.
In fact, normal classes are all instances of the metaclass type, so if the metaclass always set the __metaclass__ attribute on its instances then every class would have a __metaclass__ attribute (most of them set to type).
I would not use your decorator approach. It obscures the fact that a metaclass is involved (and which one), is still one line of boilerplate, and it's just messy to create a class from the 3 defining features of (name, bases, attributes) only to pull those 3 bits back out from the resulting class, throw the class away, and make a new class from those same 3 bits!
When you do this in Python 2.x:
class A(object):
    __metaclass__ = MyMeta

    def __init__(self):
        pass
You'd get roughly the same result if you'd written this:
attrs = {}
attrs['__metaclass__'] = MyMeta

def __init__(self):
    pass

attrs['__init__'] = __init__

A = attrs.get('__metaclass__', type)('A', (object,), attrs)
In reality calculating the metaclass is more complicated, as there actually has to be a search through all the bases to determine whether there's a metaclass conflict, and if one of the bases doesn't have type as its metaclass and attrs doesn't contain __metaclass__ then the default metaclass is the ancestor's metaclass rather than type. This is one situation where I expect your decorator "solution" will differ from using __metaclass__ directly. I'm not sure exactly what would happen if you used your decorator in a situation where using __metaclass__ would give you a metaclass conflict error, but I wouldn't expect it to be pleasant.
Also, if there are any other metaclasses involved, your method would result in them running first (possibly modifying what the name, bases, and attributes are!) and then pulling those out of the class and using it to create a new class. This could potentially be quite different than what you'd get using __metaclass__.
As for the __dict__ not giving you a real dictionary, that's just an implementation detail; I would guess for performance reasons. I doubt there is any spec that says the __dict__ of a (non-class) instance has to be the same type as the __dict__ of a class (which is also an instance btw; just an instance of a metaclass). The __dict__ attribute of a class is a "dictproxy", which allows you to look up attribute keys as if it were a dict but still isn't a dict. type is picky about the type of its third argument; it wants a real dict, not just a "dict-like" object (shame on it for spoiling duck-typing). It's not a 2.x vs 3.x thing; Python 3 behaves the same way, although it gives you a nicer string representation of the dictproxy. Python 2.4 (which is the oldest 2.x I have readily available) also has dictproxy objects for class __dict__ objects.
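The proxy-vs-dict distinction is easy to check directly; under Python 3 the proxy type is named `mappingproxy` rather than `dictproxy`, but the behaviour is the same:

```python
class A(object):
    pass

# The class' __dict__ is a read-only proxy; an instance's is a plain dict.
assert type(A.__dict__).__name__ == 'mappingproxy'   # "dictproxy" under Python 2
assert type(A().__dict__).__name__ == 'dict'

# Copying the proxy into a real dict is what satisfies type()'s picky
# third argument, as in dict(A.__dict__) above.
attrs = dict(A.__dict__)
assert isinstance(attrs, dict)
```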
My summary of your question: "I tried a new tricky way to do a thing, and it didn't quite work. Should I use the simple way instead?"
Yes, you should do it the simple way. You haven't said why you're interested in inventing a new way to do it.